wbia.algo.hots package
Submodules
wbia.algo.hots._pipeline_helpers module
- wbia.algo.hots._pipeline_helpers.testdata_post_sver(defaultdb='PZ_MTEST', qaid_list=None, daid_list=None, codename='vsmany', cfgdict=None)[source]
>>> from wbia.algo.hots._pipeline_helpers import * # NOQA
- wbia.algo.hots._pipeline_helpers.testdata_pre(stopnode, defaultdb='testdb1', p=['default'], a=['default:qindex=0:1,dindex=0:5'], **kwargs)[source]
New (1-1-2016) generic pipeline node testdata getter
- Parameters
stopnode (str) – name of pipeline function to be tested
defaultdb (str) – (default = u'testdb1')
p (list) – (default = [u'default:'])
a (list) – (default = [u'default:qsize=1,dsize=4'])
**kwargs – passed to testdata_qreq_ (qaid_override, daid_override)
- Returns
(ibs, qreq_, args)
- Return type
- CommandLine:
python -m wbia.algo.hots._pipeline_helpers --exec-testdata_pre --show
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots._pipeline_helpers import * # NOQA
>>> stopnode = 'build_chipmatches'
>>> defaultdb = 'testdb1'
>>> p = ['default:']
>>> a = ['default:qindex=0:1,dindex=0:5']
>>> qreq_, args = testdata_pre(stopnode, defaultdb, p, a)
- wbia.algo.hots._pipeline_helpers.testdata_pre_baselinefilter(defaultdb='testdb1', qaid_list=None, daid_list=None, codename='vsmany')[source]
- wbia.algo.hots._pipeline_helpers.testdata_pre_sver(defaultdb='PZ_MTEST', qaid_list=None, daid_list=None)[source]
>>> from wbia.algo.hots._pipeline_helpers import * # NOQA
- wbia.algo.hots._pipeline_helpers.testdata_sparse_matchinfo_nonagg(defaultdb='testdb1', p=['default'])[source]
- wbia.algo.hots._pipeline_helpers.testrun_pipeline_upto(qreq_, stop_node='end', verbose=True)[source]
Main tester function. Runs the pipeline by mirroring request_wbia_query_L0, but stops at a requested breakpoint and returns the local variables.
Convenience: runs the pipeline for tests; this should mirror request_wbia_query_L0.
- Ignore:
>>> # TODO: autogenerate
>>> # The following is a stub that starts the autogeneration process
>>> import utool as ut
>>> from wbia.algo.hots import pipeline
>>> source = ut.get_func_sourcecode(pipeline.request_wbia_query_L0,
>>>                                 strip_docstr=True, stripdef=True,
>>>                                 strip_comments=True)
>>> import re
>>> source = re.sub(r'^\s*$\n', '', source, flags=re.MULTILINE)
>>> print(source)
>>> ut.replace_between_tags(source, '', sentinal)
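The breakpoint mechanic described above can be sketched in plain Python. This is an illustrative toy, not the wbia implementation; the stage names and the shape of the returned locals dict are assumptions for the sketch.

```python
# Toy sketch of running a pipeline up to a named stop node and returning
# the local results accumulated so far (mirrors the idea behind
# testrun_pipeline_upto; stage names here are made up).
def run_pipeline_upto(stages, stop_node='end'):
    locals_ = {}
    for name, func in stages:
        if name == stop_node:
            break  # stop before executing the requested node
        locals_[name] = func(locals_)
    return locals_

stages = [
    ('nearest_neighbors', lambda env: 'nns'),
    ('build_chipmatches', lambda env: 'cms'),
    ('spatial_verification', lambda env: 'cms_sver'),
]
partial = run_pipeline_upto(stages, stop_node='spatial_verification')
# partial holds the results of every stage before the breakpoint
```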
wbia.algo.hots.chip_match module
python -m utool.util_inspect check_module_usage --pat="chip_match.py"
- class wbia.algo.hots.chip_match.AnnotMatch(*args, **kwargs)[source]
Bases: wbia.algo.hots.chip_match.MatchBaseIO, utool.util_dev.NiceRepr, wbia.algo.hots.chip_match._BaseVisualization, wbia.algo.hots.chip_match._AnnotMatchConvenienceGetter
This implements part of the match between whole annotations and the other annotations / names. It does not include algorithm-specific feature matches.
- classmethod from_dict(class_dict, ibs=None)[source]
Convert dict of arguments back to ChipMatch object
- initialize(qaid=None, daid_list=None, score_list=None, dnid_list=None, qnid=None, unique_nids=None, name_score_list=None, annot_score_list=None, autoinit=True)[source]
qaid and daid_list are not optional. fm_list and fsv_list are strongly encouraged; things will probably break if they are missing.
- class wbia.algo.hots.chip_match.ChipMatch(*args, **kwargs)[source]
Bases: wbia.algo.hots.chip_match._ChipMatchVisualization, wbia.algo.hots.chip_match.AnnotMatch, wbia.algo.hots.chip_match._ChipMatchScorers, wbia.algo.hots.old_chip_match._OldStyleChipMatchSimulator, wbia.algo.hots.chip_match._ChipMatchConvenienceGetter, wbia.algo.hots.chip_match._ChipMatchDebugger
Behaves as the ChipMatchOldTup named tuple until we completely replace the old structure.
- arraycast_self()[source]
Ensures the internal structure is in numpy array format. TODO: come up with a better name; remove the old initialize method and rename this to initialize?
- classmethod combine_cms(cm_list)[source]
Example
>>> # FIXME failing-test (22-Jul-2020) This test is failing and it's not clear how to fix it
>>> # xdoctest: +SKIP
>>> from wbia.core_annots import * # NOQA
>>> ibs, depc, aid_list = testdata_core(size=4)
>>> request = depc.new_request('vsone', [1], [2, 3, 4], {'dim_size': 450})
>>> rawres_list2 = request.execute(postprocess=False)
>>> cm_list = ut.take_column(rawres_list2, 1)
>>> out = ChipMatch.combine_cms(cm_list)
>>> out.score_name_nsum(request)
>>> ut.quit_if_noshow()
>>> out.ishow_analysis(request)
>>> ut.show_if_requested()
- compress_top_feature_matches(num=10, rng=<module 'numpy.random'>, use_random=True)[source]
DO NOT USE
FIXME: Use boolean lists
Removes all but the best feature matches for testing purposes.
rng = np.random.RandomState(0)
- extend_results(qreq_, other_aids=None)[source]
Return a new ChipMatch containing empty data for an extended set of aids
- Parameters
qreq (wbia.QueryRequest) – query request object with hyper-parameters
other_aids (None) – (default = None)
- Returns
out
- Return type
wbia.ChipMatch
- CommandLine:
python -m wbia.algo.hots.chip_match --exec-extend_results --show
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> import wbia
>>> cm, qreq_ = wbia.testdata_cm('PZ_MTEST',
>>>                              a='default:dindex=0:10,qindex=0:1',
>>>                              t='best:SV=False')
>>> assert len(cm.daid_list) == 9
>>> cm.assert_self(qreq_)
>>> other_aids = qreq_.ibs.get_valid_aids()
>>> out = cm.extend_results(qreq_, other_aids)
>>> assert len(out.daid_list) == 118
>>> out.assert_self(qreq_)
- classmethod from_dict(class_dict, ibs=None)[source]
Convert dict of arguments back to ChipMatch object
- classmethod from_json(json_str)[source]
Convert json string back to ChipMatch object
- CommandLine:
# FIXME: util_test is broken with classmethods
python -m wbia.algo.hots.chip_match --test-from_json --show
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> import wbia
>>> cm1, qreq_ = wbia.testdata_cm()
>>> json_str = cm1.to_json()
>>> cm = ChipMatch.from_json(json_str)
>>> ut.quit_if_noshow()
>>> cm.score_name_nsum(qreq_)
>>> cm.show_single_namematch(qreq_, 1)
>>> ut.show_if_requested()
- initialize(qaid=None, daid_list=None, fm_list=None, fsv_list=None, fk_list=None, score_list=None, H_list=None, fsv_col_lbls=None, dnid_list=None, qnid=None, unique_nids=None, name_score_list=None, annot_score_list=None, autoinit=True, filtnorm_aids=None, filtnorm_fxs=None)[source]
qaid and daid_list are not optional. fm_list and fsv_list are strongly encouraged; things will probably break if they are missing.
- rrr(verbose=True, reload_module=True)
Special class reloading function. This function is often injected as rrr of classes.
- shortlist_subset(top_aids)[source]
Returns a new cmtup_old with only the requested daids. TODO: rectify with take_feature_matches
- take_annots(idx_list, inplace=False, keepscores=True)[source]
Keeps results only for the selected annotation indices.
- CommandLine:
python -m wbia.algo.hots.chip_match take_annots
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> import wbia
>>> cm, qreq_ = wbia.testdata_cm('PZ_MTEST',
>>>                              a='default:dindex=0:10,qindex=0:1',
>>>                              t='best:sv=False')
>>> idx_list = list(range(cm.num_daids))
>>> inplace = False
>>> keepscores = True
>>> other = out = cm.take_annots(idx_list, inplace, keepscores)
>>> result = ('out = %s' % (ut.repr2(out, nl=1),))
>>> # Because the subset was all aids in order, the output
>>> # ChipMatch should be exactly the same.
>>> assert cm.inspect_difference(out), 'Should be exactly equal!'
>>> print(result)
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> import wbia
>>> cm, qreq_ = wbia.testdata_cm('PZ_MTEST',
>>>                              a='default:dindex=0:10,qindex=0:1',
>>>                              t='best:SV=False')
>>> idx_list = [0, 2]
>>> inplace = False
>>> keepscores = True
>>> other = out = cm.take_annots(idx_list, inplace, keepscores)
>>> result = ('out = %s' % (ut.repr2(out, nl=1),))
>>> print(result)
- take_feature_matches(indicies_list, inplace=False, keepscores=True)[source]
Removes outlier feature matches. TODO: rectify with shortlist_subset
- Parameters
- Returns
out
- Return type
wbia.ChipMatch
- CommandLine:
python -m wbia.algo.hots.chip_match --exec-take_feature_matches --show
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> import wbia
>>> cm, qreq_ = wbia.testdata_cm('PZ_MTEST', a='default:dindex=0:10,qindex=0:1', t='best:SV=False')
>>> indicies_list = [list(range(i + 1)) for i in range(cm.num_daids)]
>>> inplace = False
>>> keepscores = True
>>> out = cm.take_feature_matches(indicies_list, inplace, keepscores)
>>> assert not cm.inspect_difference(out, verbose=False), 'should be different'
>>> result = ('out = %s' % (ut.repr2(out),))
>>> print(result)
- to_json()[source]
Serialize ChipMatch object as JSON string
- CommandLine:
python -m wbia.algo.hots.chip_match --test-ChipMatch.to_json:0
python -m wbia.algo.hots.chip_match --test-ChipMatch.to_json
python -m wbia.algo.hots.chip_match --test-ChipMatch.to_json:1 --show
Example
>>> # ENABLE_DOCTEST
>>> # Simple doctest demonstrating the json format
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> import wbia
>>> cm, qreq_ = wbia.testdata_cm()
>>> cm.compress_top_feature_matches(num=4, rng=np.random.RandomState(0))
>>> # Serialize
>>> print('\n\nRaw ChipMatch JSON:\n')
>>> json_str = cm.to_json()
>>> print(json_str)
>>> print('\n\nPretty ChipMatch JSON:\n')
>>> # Pretty String Formatting
>>> dictrep = ut.from_json(json_str)
>>> dictrep = ut.delete_dict_keys(dictrep, [key for key, val in dictrep.items() if val is None])
>>> result = ut.repr2_json(dictrep, nl=2, precision=2, key_order_metric='strlen')
>>> print(result)
Example
>>> # ENABLE_DOCTEST
>>> # test to convert back and forth from json
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> import wbia
>>> cm, qreq_ = wbia.testdata_cm()
>>> cm1 = cm
>>> # Serialize
>>> json_str = cm.to_json()
>>> print(repr(json_str))
>>> # Unserialize
>>> cm = ChipMatch.from_json(json_str)
>>> # Show if it works
>>> ut.quit_if_noshow()
>>> cm.score_name_nsum(qreq_)
>>> cm.show_single_namematch(qreq_, 1)
>>> ut.show_if_requested()
>>> # result = ('json_str = \n%s' % (str(json_str),))
>>> # print(result)
- class wbia.algo.hots.chip_match.MatchBaseIO[source]
Bases:
object
- save_to_fpath(fpath, verbose=False)[source]
- CommandLine:
python wbia --tf MatchBaseIO.save_to_fpath --verbtest --show
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> qaid = 18
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[qaid])
>>> cm = cm_list[0]
>>> cm.score_name_nsum(qreq_)
>>> dpath = ut.get_app_resource_dir('wbia')
>>> fpath = join(dpath, 'tmp_chipmatch.cPkl')
>>> ut.delete(fpath)
>>> cm.save_to_fpath(fpath)
>>> cm2 = ChipMatch.load_from_fpath(fpath)
>>> assert cm == cm2
>>> ut.quit_if_noshow()
>>> cm.ishow_analysis(qreq_)
>>> ut.show_if_requested()
- wbia.algo.hots.chip_match.get_chipmatch_fname(qaid, qreq_, qauuid=None, cfgstr=None, TRUNCATE_UUIDS=False, MAX_FNAME_LEN=200)[source]
- CommandLine:
python -m wbia.algo.hots.chip_match --test-get_chipmatch_fname
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.chip_match import * # NOQA
>>> qreq_, args = plh.testdata_pre('spatial_verification',
>>>                                defaultdb='PZ_MTEST', qaid_override=[18],
>>>                                p='default:sqrd_dist_on=True')
>>> cm_list = args.cm_list_FILT
>>> cm = cm_list[0]
>>> fname = get_chipmatch_fname(cm.qaid, qreq_, qauuid=None,
>>>                             TRUNCATE_UUIDS=False, MAX_FNAME_LEN=200)
>>> result = fname
>>> print(result)
qaid=18_cm_cvgrsbnffsgifyom_quuid=a126d459-b730-573e-7a21-92894b016565.cPkl
- wbia.algo.hots.chip_match.prepare_dict_uuids(class_dict, ibs)[source]
Hacks to ensure proper uuid conversion
- wbia.algo.hots.chip_match.safe_check_lens_eq(arr1, arr2, msg=None)[source]
Check if it is safe to check if two arrays are equal
safe_check_lens_eq(None, 1)
safe_check_lens_eq([3], [2, 4])
- wbia.algo.hots.chip_match.safe_check_nested_lens_eq(arr1, arr2)[source]
Check if it is safe to check if two arrays are equal (nested)
safe_check_nested_lens_eq(None, 1)
safe_check_nested_lens_eq([[3, 4]], [[2, 4]])
safe_check_nested_lens_eq([[1, 2, 3], [1, 2]], [[1, 2, 3], [1, 2]])
safe_check_nested_lens_eq([[1, 2, 3], [1, 2]], [[1, 2, 3], [1]])
wbia.algo.hots.exceptions module
wbia.algo.hots.hstypes module
hstypes Todo:
* SIFT: Root_SIFT -> L2 normalized -> Centering.
  # http://hal.archives-ouvertes.fr/docs/00/84/07/21/PDF/RR-8325.pdf
  The devil is in the details http://www.robots.ox.ac.uk/~vilem/bmvc2011.pdf
  This says dont clip, do rootsift instead
  # http://hal.archives-ouvertes.fr/docs/00/68/81/69/PDF/hal_v1.pdf
* Quantization of residual vectors
* Burstiness normalization for N-SMK
* Implemented A-SMK
* Incorporate Spatial Verification
* Implement correct cfgstrs based on algorithm input for cached computations.
* Color by word
* Profile on hyrule
* Train vocab on paris
* Remove self matches.
* New SIFT parameters for pyhesaff (root, powerlaw, meanwhatever, output_dtype)
Todo
This needs to be less constant when using non-sift descriptors
Issues:
* 10GB are in use when performing a query on Oxford 5K
* errors when there is a word without any database vectors; currently a weight of zero is hacked in
- class wbia.algo.hots.hstypes.FiltKeys[source]
Bases:
object
- BARL2 = 'bar_l2'
- DIST = 'dist'
- DISTINCTIVENESS = 'distinctiveness'
- FG = 'fg'
- HOMOGERR = 'homogerr'
- LNBNN = 'lnbnn'
- RATIO = 'ratio'
- wbia.algo.hots.hstypes.PSEUDO_UINT8_MAX_SQRD = 262144.0
SeeAlso: vt.distance.understanding_pseudomax_props
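The value is consistent with a pseudo-maximum component value of 512; that pseudo-max is an assumption inferred from the constant itself (see vt.distance.understanding_pseudomax_props for the actual derivation). A quick sanity check:

```python
# 262144.0 == 512 ** 2; the pseudo-max of 512 is inferred from the
# constant, not taken directly from the wbia source.
PSEUDO_UINT8_MAX = 512
PSEUDO_UINT8_MAX_SQRD = float(PSEUDO_UINT8_MAX ** 2)
assert PSEUDO_UINT8_MAX_SQRD == 262144.0
```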
wbia.algo.hots.match_chips4 module
Runs functions in the pipeline to get query results and does some caching.
- wbia.algo.hots.match_chips4.execute_query2(qreq_, verbose, save_qcache, batch_size=None, use_supercache=False)[source]
Breaks up the query request into several subrequests to process "more efficiently" and more safely.
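The batching idea can be sketched in plain Python. This is an illustrative toy, not the wbia implementation; the helper name and the callback are made up for the sketch.

```python
# Illustrative sketch of how a query request can be broken into
# fixed-size subrequests whose results are merged back together,
# mirroring what execute_query2 does with batch_size.
def execute_in_batches(qaids, batch_size, run_batch):
    qaid2_cm = {}
    for i in range(0, len(qaids), batch_size):
        subrequest = qaids[i:i + batch_size]  # one subrequest per chunk
        qaid2_cm.update(run_batch(subrequest))
    return qaid2_cm

# Toy "pipeline" that maps each query aid to a fake ChipMatch label
qaid2_cm = execute_in_batches(list(range(7)), 3,
                              lambda batch: {q: 'cm_%d' % q for q in batch})
```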
- wbia.algo.hots.match_chips4.execute_query_and_save_L1(qreq_, use_cache, save_qcache, verbose=True, batch_size=None, use_supercache=False, invalidate_supercache=False)[source]
- Parameters
qreq (wbia.QueryRequest) –
use_cache (bool) –
- Returns
qaid2_cm
- CommandLine:
python -m wbia.algo.hots.match_chips4 execute_query_and_save_L1:0
python -m wbia.algo.hots.match_chips4 execute_query_and_save_L1:1
python -m wbia.algo.hots.match_chips4 execute_query_and_save_L1:2
python -m wbia.algo.hots.match_chips4 execute_query_and_save_L1:3
Example
>>> # SLOW_DOCTEST
>>> # xdoctest: +SKIP
>>> from wbia.algo.hots.match_chips4 import * # NOQA
>>> cfgdict1 = dict(codename='vsmany', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = wbia.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> use_cache, save_qcache, verbose = False, False, True
>>> qaid2_cm = execute_query_and_save_L1(qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example
>>> # SLOW_DOCTEST
>>> # xdoctest: +SKIP
>>> from wbia.algo.hots.match_chips4 import * # NOQA
>>> cfgdict1 = dict(codename='vsone', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = wbia.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> use_cache, save_qcache, verbose = False, False, True
>>> qaid2_cm = execute_query_and_save_L1(qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example
>>> # SLOW_DOCTEST
>>> # xdoctest: +SKIP
>>> # TEST SAVE
>>> from wbia.algo.hots.match_chips4 import * # NOQA
>>> import wbia
>>> cfgdict1 = dict(codename='vsmany', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = wbia.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> use_cache, save_qcache, verbose = False, True, True
>>> qaid2_cm = execute_query_and_save_L1(qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example
>>> # SLOW_DOCTEST
>>> # xdoctest: +SKIP
>>> # TEST LOAD
>>> from wbia.algo.hots.match_chips4 import * # NOQA
>>> import wbia
>>> cfgdict1 = dict(codename='vsmany', sv_on=True)
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = wbia.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3, 4])
>>> use_cache, save_qcache, verbose = True, True, True
>>> qaid2_cm = execute_query_and_save_L1(qreq_, use_cache, save_qcache, verbose)
>>> print(qaid2_cm)
Example
>>> # ENABLE_DOCTEST
>>> # TEST PARTIAL HIT
>>> from wbia.algo.hots.match_chips4 import * # NOQA
>>> import wbia
>>> cfgdict1 = dict(codename='vsmany', sv_on=False, prescore_method='csum')
>>> p = 'default' + ut.get_cfg_lbl(cfgdict1)
>>> qreq_ = wbia.main_helpers.testdata_qreq_(p=p, qaid_override=[1, 2, 3,
>>>                                                              4, 5, 6,
>>>                                                              7, 8, 9])
>>> use_cache, save_qcache, verbose = False, True, False
>>> qaid2_cm = execute_query_and_save_L1(qreq_, use_cache,
>>>                                      save_qcache, verbose,
>>>                                      batch_size=3)
>>> cm = qaid2_cm[1]
>>> ut.delete(cm.get_fpath(qreq_))
>>> cm = qaid2_cm[4]
>>> ut.delete(cm.get_fpath(qreq_))
>>> cm = qaid2_cm[5]
>>> ut.delete(cm.get_fpath(qreq_))
>>> cm = qaid2_cm[6]
>>> ut.delete(cm.get_fpath(qreq_))
>>> print('Re-execute')
>>> qaid2_cm_ = execute_query_and_save_L1(qreq_, use_cache,
>>>                                       save_qcache, verbose,
>>>                                       batch_size=3)
>>> assert all([qaid2_cm_[qaid] == qaid2_cm[qaid] for qaid in qreq_.qaids])
>>> [ut.delete(fpath) for fpath in qreq_.get_chipmatch_fpaths(qreq_.qaids)]
- Ignore:
other = cm_ = qaid2_cm_[qaid]
cm = qaid2_cm[qaid]
- wbia.algo.hots.match_chips4.submit_query_request(qreq_, use_cache=None, use_bigcache=None, verbose=None, save_qcache=None, use_supercache=None, invalidate_supercache=None)[source]
Called from qreq_.execute
Checks a big cache for qaid2_cm. On a cache miss, it tries to load each cm individually; on an individual cache miss, it performs the query.
- CommandLine:
python -m wbia.algo.hots.match_chips4 --test-submit_query_request
Example
>>> # SLOW_DOCTEST
>>> # xdoctest: +SKIP
>>> from wbia.algo.hots.match_chips4 import * # NOQA
>>> import wbia
>>> qaid_list = [1]
>>> daid_list = [1, 2, 3, 4, 5]
>>> use_bigcache = True
>>> use_cache = True
>>> ibs = wbia.opendb(db='testdb1')
>>> qreq_ = ibs.new_query_request(qaid_list, daid_list, verbose=True)
>>> cm_list = submit_query_request(qreq_=qreq_)
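The two-level caching strategy can be sketched in plain Python. This is an illustrative toy, not the actual wbia code; the function name and cache shapes are assumptions for the sketch.

```python
# Illustrative sketch of the two-level cache: check a big cache keyed on
# the whole request first; on a miss, load each result individually; run
# the query only for the qaids that are still missing.
def submit_with_cache(qaids, big_cache, item_cache, compute):
    key = tuple(qaids)
    if key in big_cache:            # big-cache hit: everything at once
        return big_cache[key]
    qaid2_cm = {q: item_cache[q] for q in qaids if q in item_cache}
    missing = [q for q in qaids if q not in qaid2_cm]
    if missing:                     # individual misses: perform the query
        computed = compute(missing)
        qaid2_cm.update(computed)
        item_cache.update(computed)
    big_cache[key] = qaid2_cm
    return qaid2_cm

big, items = {}, {1: 'cm1'}
out = submit_with_cache([1, 2], big, items,
                        lambda qs: {q: 'cm%d' % q for q in qs})
```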
wbia.algo.hots.name_scoring module
- class wbia.algo.hots.name_scoring.NameScoreTup(sorted_nids, sorted_nscore, sorted_aids, sorted_scores)
Bases:
tuple
- sorted_aids
Alias for field number 2
- sorted_nids
Alias for field number 0
- sorted_nscore
Alias for field number 1
- sorted_scores
Alias for field number 3
- wbia.algo.hots.name_scoring.align_name_scores_with_annots(annot_score_list, annot_aid_list, daid2_idx, name_groupxs, name_score_list)[source]
Takes name scores and gives them to the best annotation.
- Returns
list of scores aligned with cm.daid_list and cm.dnid_list
- Return type
score_list
- Parameters
- CommandLine:
python -m wbia.algo.hots.name_scoring --test-align_name_scores_with_annots
python -m wbia.algo.hots.name_scoring --test-align_name_scores_with_annots --show
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.name_scoring import * # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('PZ_MTEST', qaid_list=[18])
>>> cm = cm_list[0]
>>> cm.evaluate_csum_annot_score(qreq_)
>>> cm.evaluate_nsum_name_score(qreq_)
>>> # Annot aligned lists
>>> annot_score_list = cm.algo_annot_scores['csum']
>>> annot_aid_list = cm.daid_list
>>> daid2_idx = cm.daid2_idx
>>> # Name aligned lists
>>> name_score_list = cm.algo_name_scores['nsum']
>>> name_groupxs = cm.name_groupxs
>>> # Execute Function
>>> score_list = align_name_scores_with_annots(annot_score_list, annot_aid_list, daid2_idx, name_groupxs, name_score_list)
>>> # Check that the correct name gets the highest score
>>> target = name_score_list[cm.nid2_nidx[cm.qnid]]
>>> test_index = np.where(score_list == target)[0][0]
>>> cm.score_list = score_list
>>> ut.assert_eq(ibs.get_annot_name_rowids(cm.daid_list[test_index]), cm.qnid)
>>> assert ut.isunique(cm.dnid_list[score_list > 0]), 'bad name score'
>>> top_idx = cm.algo_name_scores['nsum'].argmax()
>>> assert cm.get_top_nids()[0] == cm.unique_nids[top_idx], 'bug in alignment'
>>> ut.quit_if_noshow()
>>> cm.show_ranked_matches(qreq_)
>>> ut.show_if_requested()
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.name_scoring import * # NOQA
>>> annot_score_list = []
>>> annot_aid_list = []
>>> daid2_idx = {}
>>> # Name aligned lists
>>> name_score_list = np.array([], dtype=np.float32)
>>> name_groupxs = []
>>> # Execute Function
>>> score_list = align_name_scores_with_annots(annot_score_list, annot_aid_list, daid2_idx, name_groupxs, name_score_list)
- wbia.algo.hots.name_scoring.compute_fmech_score(cm, qreq_=None, hack_single_ori=False)[source]
nsum. This is the fmech scoring mechanism.
- Parameters
cm (wbia.ChipMatch) –
- Returns
(unique_nids, nsum_score_list)
- Return type
- CommandLine:
python -m wbia.algo.hots.name_scoring --test-compute_fmech_score
python -m wbia.algo.hots.name_scoring --test-compute_fmech_score:0
python -m wbia.algo.hots.name_scoring --test-compute_fmech_score:2
utprof.py -m wbia.algo.hots.name_scoring --test-compute_fmech_score:2
utprof.py -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --db PZ_Master1 -a timectrl:qindex=0:256
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.name_scoring import * # NOQA
>>> cm = testdata_chipmatch()
>>> nsum_score_list = compute_fmech_score(cm)
>>> assert np.all(nsum_score_list == [ 4., 7., 5.])
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.name_scoring import * # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('PZ_MTEST', qaid_list=[18])
>>> cm = cm_list[0]
>>> cm.evaluate_dnids(qreq_)
>>> cm._cast_scores()
>>> #cm.qnid = 1 # Hack for testdb1 names
>>> nsum_score_list = compute_fmech_score(cm, qreq_)
>>> #assert np.all(nsum_nid_list == cm.unique_nids), 'nids out of alignment'
>>> flags = (cm.unique_nids == cm.qnid)
>>> max_true = nsum_score_list[flags].max()
>>> max_false = nsum_score_list[~flags].max()
>>> assert max_true > max_false, 'is this truly a hard case?'
>>> assert max_true > 1.2, 'score=%r should be higher for aid=18' % (max_true,)
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.name_scoring import * # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('PZ_MTEST', qaid_list=[18], cfgdict=dict(query_rotation_heuristic=True))
>>> cm = cm_list[0]
>>> cm.score_name_nsum(qreq_)
>>> ut.quit_if_noshow()
>>> cm.show_ranked_matches(qreq_, ori=True)
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.name_scoring import * # NOQA
>>> #ibs, qreq_, cm_list = plh.testdata_pre_sver('testdb1', qaid_list=[1])
>>> ibs, qreq_, cm_list = plh.testdata_post_sver('testdb1', qaid_list=[1], cfgdict=dict(query_rotation_heuristic=True))
>>> cm = cm_list[0]
>>> cm.score_name_nsum(qreq_)
>>> ut.quit_if_noshow()
>>> cm.show_ranked_matches(qreq_, ori=True)
- wbia.algo.hots.name_scoring.get_chipmatch_namescore_nonvoting_feature_flags(cm, qreq_=None)[source]
DEPRICATE
Computes flags to describe which features can or cannot vote
- CommandLine:
python -m wbia.algo.hots.name_scoring --exec-get_chipmatch_namescore_nonvoting_feature_flags
Example
>>> # ENABLE_DOCTEST
>>> # FIXME: breaks when fg_on=True
>>> from wbia.algo.hots.name_scoring import * # NOQA
>>> from wbia.algo.hots import name_scoring
>>> # Test to make sure name score and chips score are equal when per_name=1
>>> qreq_, args = plh.testdata_pre('spatial_verification', defaultdb='PZ_MTEST', a=['default:dpername=1,qsize=1,dsize=10'], p=['default:K=1,fg_on=True'])
>>> cm_list = args.cm_list_FILT
>>> ibs = qreq_.ibs
>>> cm = cm_list[0]
>>> cm.evaluate_dnids(qreq_)
>>> featflat_list = get_chipmatch_namescore_nonvoting_feature_flags(cm, qreq_)
>>> assert all(list(map(np.all, featflat_list))), 'all features should be able to vote in K=1, per_name=1 case'
wbia.algo.hots.neighbor_index module
Todo
Remove Bloat
multi_index.py as well
https://github.com/spotify/annoy
- class wbia.algo.hots.neighbor_index.NeighborIndex(flann_params, cfgstr)[source]
Bases:
object
Wrapper class around flann; stores the flann index and the data it needs to index into.
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> nnindexer, qreq_, ibs = testdata_nnindexer()
- add_support(new_daid_list, new_vecs_list, new_fgws_list, new_fxs_list, verbose=True)[source]
adds support data (aka data to be indexed)
- Parameters
- CommandLine:
python -m wbia.algo.hots.neighbor_index --test-add_support
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> nnindexer, qreq_, ibs = testdata_nnindexer(use_memcache=False)
>>> new_daid_list = [2, 3, 4]
>>> K = 2
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> # get before data
>>> (qfx2_idx1, qfx2_dist1) = nnindexer.knn(qfx2_vec, K)
>>> new_vecs_list, new_fgws_list, new_fxs_list = get_support_data(qreq_, new_daid_list)
>>> # execute test function
>>> nnindexer.add_support(new_daid_list, new_vecs_list, new_fgws_list, new_fxs_list)
>>> # test before data vs after data
>>> (qfx2_idx2, qfx2_dist2) = nnindexer.knn(qfx2_vec, K)
>>> assert qfx2_idx2.max() > qfx2_idx1.max()
- add_wbia_support(qreq_, new_daid_list, verbose=True)[source]
# TODO: ensure that the memcache changes appropriately
- batch_knn(vecs, K, chunksize=4096, label='batch knn')[source]
Works like indexer.knn, but the input is split into batches and progress is reported to give an estimated time remaining.
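The batching behavior can be sketched as follows. This is an illustrative toy, not the wbia implementation; the callback-style `knn` argument is an assumption, and a real version would also report per-chunk progress for the time-remaining estimate.

```python
# Illustrative sketch of batched knn: split the query vectors into
# fixed-size chunks, query each chunk, and concatenate the per-chunk
# results so the output matches one monolithic knn call.
def batch_knn(vecs, K, knn, chunksize=4096):
    idx_parts, dist_parts = [], []
    for i in range(0, len(vecs), chunksize):
        idxs, dists = knn(vecs[i:i + chunksize], K)
        idx_parts.extend(idxs)
        dist_parts.extend(dists)
    return idx_parts, dist_parts

# Toy knn that "finds" each query vector itself K times at distance 0
toy_knn = lambda chunk, K: ([[v] * K for v in chunk],
                            [[0.0] * K for v in chunk])
idxs, dists = batch_knn(list(range(10)), 2, toy_knn, chunksize=4)
```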
- ensure_indexer(cachedir, verbose=True, force_rebuild=False, memtrack=None, prog_hook=None)[source]
Ensures that you get a neighbor indexer. It either loads a cached indexer or rebuilds a new one.
- ext = '.flann'
- get_cfgstr(noquery=False)[source]
Returns a string which uniquely identifies the configuration and support data.
- Parameters
noquery (bool) – if True cfgstr is only relevant to building the index. No search params are returned (default = False)
- Returns
flann_cfgstr
- Return type
- CommandLine:
python -m wbia.algo.hots.neighbor_index --test-get_cfgstr
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> import wbia
>>> cfgdict = dict(fg_on=False)
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', p='default:fg_on=False')
>>> qreq_.load_indexer()
>>> nnindexer = qreq_.indexer
>>> noquery = True
>>> flann_cfgstr = nnindexer.get_cfgstr(noquery)
>>> result = ('flann_cfgstr = %s' % (str(flann_cfgstr),))
>>> print(result)
flann_cfgstr = _FLANN((algo=kdtree,seed=42,t=8,))_VECS((11260,128)gj5nea@ni0%f3aja)
- get_nn_aids(qfx2_nnidx)[source]
- Parameters
qfx2_nnidx – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
- Returns
- (N x K) qfx2_fx[n][k] is the annotation id index of the
kth approximate nearest data vector
- Return type
qfx2_aid
- CommandLine:
python -m wbia.algo.hots.neighbor_index --exec-get_nn_aids
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> import wbia
>>> cfgdict = dict(fg_on=False)
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', p='default:fg_on=False,dim_size=450,resize_dim=area')
>>> qreq_.load_indexer()
>>> nnindexer = qreq_.indexer
>>> qfx2_vec = qreq_.ibs.get_annot_vecs(
>>>     qreq_.get_internal_qaids()[0],
>>>     config2_=qreq_.get_internal_query_config2())
>>> num_neighbors = 4
>>> (qfx2_nnidx, qfx2_dist) = nnindexer.knn(qfx2_vec, num_neighbors)
>>> qfx2_aid = nnindexer.get_nn_aids(qfx2_nnidx)
>>> assert qfx2_aid.shape[1] == num_neighbors
>>> print('qfx2_aid.shape = %r' % (qfx2_aid.shape,))
>>> assert qfx2_aid.shape[1] == 4
>>> ut.assert_inbounds(qfx2_aid.shape[0], 1200, 1300)
- get_nn_featxs(qfx2_nnidx)[source]
- Parameters
qfx2_nnidx – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
- Returns
- (N x K) qfx2_fx[n][k] is the feature index (w.r.t the
source annotation) of the kth approximate nearest data vector
- Return type
qfx2_fx
- get_nn_fgws(qfx2_nnidx)[source]
Gets foreground weights of neighbors
- CommandLine:
python -m wbia --tf NeighborIndex.get_nn_fgws
- Parameters
qfx2_nnidx – (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector
- Returns
- (N x K) qfx2_fgw[n][k] is the annotation id index of the kth foreground weight
- Return type
qfx2_fgw
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> nnindexer, qreq_, ibs = testdata_nnindexer(dbname='testdb1')
>>> qfx2_nnidx = np.array([[0, 1, 2], [3, 4, 5]])
>>> qfx2_fgw = nnindexer.get_nn_fgws(qfx2_nnidx)
- get_removed_idxs()[source]
__removed_ids = nnindexer.flann._FLANN__removed_ids
invalid_idxs = nnindexer.get_removed_idxs()
assert len(np.intersect1d(invalid_idxs, __removed_ids)) == len(__removed_ids)
- init_support(aid_list, vecs_list, fgws_list, fxs_list, verbose=True)[source]
Prepares inverted indices and the FLANN data structure.
flattens vecs_list and builds a reverse index from the flattened indices (idx) to the original aids and fxs
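The flattening and reverse-index construction described above can be sketched in plain Python. This is an illustrative toy, not the wbia code; the function name and list-based containers are assumptions (the real implementation uses numpy arrays).

```python
# Illustrative sketch of the reverse index: flatten per-annotation
# vector lists and record, for each flat index idx, the originating
# annotation id (aid) and feature index (fx) within that annotation.
def build_reverse_index(aid_list, vecs_list):
    flat_vecs, idx2_aid, idx2_fx = [], [], []
    for aid, vecs in zip(aid_list, vecs_list):
        for fx, vec in enumerate(vecs):
            flat_vecs.append(vec)
            idx2_aid.append(aid)  # which annotation this vector came from
            idx2_fx.append(fx)    # which feature within that annotation
    return flat_vecs, idx2_aid, idx2_fx

flat, idx2_aid, idx2_fx = build_reverse_index([7, 8], [['a', 'b'], ['c']])
```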
- knn(qfx2_vec, K)[source]
Returns the indices and squared distance to the nearest K neighbors. The distance is normalized between zero and one using VEC_PSEUDO_MAX_DISTANCE = (np.sqrt(2) * VEC_PSEUDO_MAX)
- Parameters
qfx2_vec – (N x D) an array of N, D-dimensional query vectors
K – number of approximate nearest neighbors to find
- Returns: tuple of (qfx2_idx, qfx2_dist)
- qfx2_idx (ndarray): (N x K) qfx2_idx[n][k] is the index of the kth approximate nearest data vector w.r.t qfx2_vec[n]
- qfx2_dist (ndarray): (N x K) qfx2_dist[n][k] is the distance to the kth approximate nearest data vector w.r.t. qfx2_vec[n]; the distance is normalized squared euclidean distance.
- CommandLine:
python -m wbia --tf NeighborIndex.knn:0 --debug2
python -m wbia --tf NeighborIndex.knn:1
Example
>>> # FIXME failing-test (22-Jul-2020) This test is failing and it's not clear how to fix it
>>> # xdoctest: +SKIP
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> indexer, qreq_, ibs = testdata_nnindexer()
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> K = 2
>>> indexer.debug_nnindexer()
>>> assert vt.check_sift_validity(qfx2_vec), 'bad SIFT properties'
>>> (qfx2_idx, qfx2_dist) = indexer.knn(qfx2_vec, K)
>>> result = str(qfx2_idx.shape) + ' ' + str(qfx2_dist.shape)
>>> print('qfx2_vec.dtype = %r' % (qfx2_vec.dtype,))
>>> print('indexer.max_distance_sqrd = %r' % (indexer.max_distance_sqrd,))
>>> assert np.all(qfx2_dist < 1.0), (
>>>     'distance should be less than 1. got %r' % (qfx2_dist,))
>>> # Ensure distance calculations are correct
>>> qfx2_dvec = indexer.idx2_vec[qfx2_idx.T]
>>> targetdist = vt.L2_sift(qfx2_vec, qfx2_dvec).T ** 2
>>> rawdist = vt.L2_sqrd(qfx2_vec, qfx2_dvec).T
>>> assert np.all(qfx2_dist * indexer.max_distance_sqrd == rawdist), (
>>>     'inconsistent distance calculations')
>>> assert np.allclose(targetdist, qfx2_dist), (
>>>     'inconsistent distance calculations')
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> indexer, qreq_, ibs = testdata_nnindexer()
>>> qfx2_vec = np.empty((0, 128), dtype=indexer.get_dtype())
>>> K = 2
>>> (qfx2_idx, qfx2_dist) = indexer.knn(qfx2_vec, K)
>>> result = str(qfx2_idx.shape) + ' ' + str(qfx2_dist.shape)
>>> print(result)
(0, 2) (0, 2)
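The normalization described in the docstring can be worked through numerically. VEC_PSEUDO_MAX = 512 is an assumption inferred from PSEUDO_UINT8_MAX_SQRD = 262144.0 in hstypes, not taken directly from the wbia source.

```python
import math

# The docstring gives VEC_PSEUDO_MAX_DISTANCE = sqrt(2) * VEC_PSEUDO_MAX,
# so squared distances are divided by 2 * 512 ** 2 = 524288 to land in
# [0, 1]. VEC_PSEUDO_MAX = 512 is an assumption (see lead-in).
VEC_PSEUDO_MAX = 512
max_distance_sqrd = (math.sqrt(2) * VEC_PSEUDO_MAX) ** 2

def normalize_sqrd_dist(raw_sqrd_dist):
    # map a raw squared euclidean distance into the normalized range
    return raw_sqrd_dist / max_distance_sqrd
```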
- load(cachedir=None, fpath=None, verbose=True)[source]
Loads a cached flann neighbor indexer from disk (not the data)
- prefix1 = 'flann'
- remove_support(remove_daid_list, verbose=True)[source]
- CommandLine:
python -m wbia.algo.hots.neighbor_index --test-remove_support
- SeeAlso:
~/code/flann/src/python/pyflann/index.py
Example
>>> # SLOW_DOCTEST
>>> # xdoctest: +SKIP
>>> # (IMPORTANT)
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> nnindexer, qreq_, ibs = testdata_nnindexer(use_memcache=False)
>>> remove_daid_list = [8, 9, 10, 11]
>>> K = 2
>>> qfx2_vec = ibs.get_annot_vecs(1, config2_=qreq_.get_internal_query_config2())
>>> # get before data
>>> (qfx2_idx1, qfx2_dist1) = nnindexer.knn(qfx2_vec, K)
>>> # execute test function
>>> nnindexer.remove_support(remove_daid_list)
>>> # test before data vs after data
>>> (qfx2_idx2, qfx2_dist2) = nnindexer.knn(qfx2_vec, K)
>>> ax2_nvecs = ut.dict_take(ut.dict_hist(nnindexer.idx2_ax), range(len(nnindexer.ax2_aid)))
>>> assert qfx2_idx2.max() < ax2_nvecs[0], 'should only get points from aid 7'
>>> assert qfx2_idx1.max() > ax2_nvecs[0], 'should get points from everyone'
- remove_wbia_support(qreq_, remove_daid_list, verbose=True)[source]
# TODO: ensure that the memcache changes appropriately
- requery_knn(qfx2_vec, K, pad, impossible_aids, recover=True)[source]
hack for iccv - this is a highly coupled function
- CommandLine:
python -m wbia.algo.hots.neighbor_index requery_knn
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', a='default')
>>> qreq_.load_indexer()
>>> indexer = qreq_.indexer
>>> qannot = qreq_.internal_qannots[1]
>>> qfx2_vec = qannot.vecs
>>> K = 3
>>> pad = 1
>>> ibs = qreq_.ibs
>>> qaid = qannot.aid
>>> impossible_aids = ibs.get_annot_groundtruth(qaid, noself=False)
>>> impossible_aids = np.array([1, 2, 3, 4, 5])
>>> qfx2_idx, qfx2_dist = indexer.requery_knn(qfx2_vec, K, pad,
>>>                                           impossible_aids)
>>> #indexer.get_nn_axs(qfx2_idx)
>>> assert np.all(np.diff(qfx2_dist, axis=1) >= 0)
- rrr(verbose=True, reload_module=True)
Special class reloading function. This function is often injected as the ``rrr`` method of classes.
- class wbia.algo.hots.neighbor_index.NeighborIndex2(flann_params=None, cfgstr=None)[source]
Bases:
wbia.algo.hots.neighbor_index.NeighborIndex, utool.util_dev.NiceRepr
- rrr(verbose=True, reload_module=True)
Special class reloading function. This function is often injected as the ``rrr`` method of classes.
- wbia.algo.hots.neighbor_index.get_support_data(qreq_, daid_list)[source]
- CommandLine:
python -m wbia.algo.hots.neighbor_index get_support_data --show
Example
>>> # xdoctest: +REQUIRES(module:wbia_cnn)
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='PZ_MTEST', p=':fgw_thresh=.9,maxscale_thresh=10', a=':size=2')
>>> daid_list = qreq_.daids
>>> tup = get_support_data(qreq_, daid_list)
>>> vecs_list, fgws_list, fxs_list = tup
>>> assert all([np.all(fgws > .9) for fgws in fgws_list])
>>> result = ('depth_profile = %r' % (ut.depth_profile(tup),))
>>> print(result)
depth_profile = [[(128, 128), (174, 128)], [128, 174], [128, 174]]
I can’t figure out why this test isn’t deterministic all the time, and I can’t get it to reproduce the non-determinism.
This could be due to theano.
depth_profile = [[(39, 128), (22, 128)], [39, 22], [39, 22]]
depth_profile = [[(35, 128), (24, 128)], [35, 24], [35, 24]]
depth_profile = [[(34, 128), (31, 128)], [34, 31], [34, 31]]
depth_profile = [[(83, 128), (129, 128)], [83, 129], [83, 129]]
depth_profile = [[(13, 128), (104, 128)], [13, 104], [13, 104]]
- wbia.algo.hots.neighbor_index.invert_index(vecs_list, fgws_list, ax_list, fxs_list, verbose=True)[source]
Aggregates descriptors of input annotations and returns inverted information
- Parameters
- Returns
(idx2_vec, idx2_fgw, idx2_ax, idx2_fx)
- Return type
- CommandLine:
python -m wbia.algo.hots.neighbor_index invert_index
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> rng = np.random.RandomState(42)
>>> DIM_SIZE = 16
>>> nFeat_list = [3, 0, 4, 1]
>>> vecs_list = [rng.randn(nFeat, DIM_SIZE) for nFeat in nFeat_list]
>>> fgws_list = [rng.randn(nFeat) for nFeat in nFeat_list]
>>> fxs_list = [np.arange(nFeat) for nFeat in nFeat_list]
>>> ax_list = np.arange(len(vecs_list))
>>> fgws_list = None
>>> verbose = True
>>> tup = invert_index(vecs_list, fgws_list, ax_list, fxs_list)
>>> (idx2_vec, idx2_fgw, idx2_ax, idx2_fx) = tup
>>> result = 'output depth_profile = %s' % (ut.depth_profile(tup),)
>>> print(result)
output depth_profile = [(8, 16), 1, 8, 8]
Example
>>> # xdoctest: +REQUIRES(--slow)
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', a='default:species=zebra_plains', p='default:fgw_thresh=.999')
>>> vecs_list, fgws_list, fxs_list = get_support_data(qreq_, qreq_.daids)
>>> ax_list = np.arange(len(vecs_list))
>>> input_ = vecs_list, fgws_list, ax_list, fxs_list
>>> print('input depth_profile = %s' % (ut.depth_profile(input_),))
>>> tup = invert_index(*input_)
>>> (idx2_vec, idx2_fgw, idx2_ax, idx2_fx) = tup
>>> result = 'output depth_profile = %s' % (ut.depth_profile(tup),)
>>> print(result)
output depth_profile = [(1912, 128), 1912, 1912, 1912]
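The inversion is essentially parallel-array bookkeeping: stack every annotation's descriptors into one big matrix, and record for each stacked row which annotation index (ax) and within-annotation feature index (fx) it came from. A minimal standalone sketch of that idea in plain numpy (a hypothetical `invert_index_sketch`, not the wbia implementation; the fgw weight handling is omitted):

```python
import numpy as np

def invert_index_sketch(vecs_list, ax_list, fxs_list):
    # Stack all per-annotation descriptor matrices into one
    # (num_total_feats x dim) array
    idx2_vec = np.vstack(vecs_list)
    # For each stacked row, remember which annotation it came from ...
    nfeats = [len(vecs) for vecs in vecs_list]
    idx2_ax = np.repeat(ax_list, nfeats)
    # ... and which feature index it had within that annotation
    idx2_fx = np.concatenate(fxs_list)
    return idx2_vec, idx2_ax, idx2_fx

# Mirror of the doctest setup: 4 annotations with 3, 0, 4, 1 features
rng = np.random.RandomState(42)
nfeat_list = [3, 0, 4, 1]
vecs_list = [rng.randn(n, 16) for n in nfeat_list]
fxs_list = [np.arange(n) for n in nfeat_list]
ax_list = np.arange(len(vecs_list))
idx2_vec, idx2_ax, idx2_fx = invert_index_sketch(vecs_list, ax_list, fxs_list)
print(idx2_vec.shape)  # (8, 16), matching the doctest's depth profile
print(idx2_ax)         # annotation 1 has no features, so it never appears
```

Note how an annotation with zero features simply contributes nothing to the stacked arrays, which is why the doctest's `nFeat_list = [3, 0, 4, 1]` yields 8 total rows.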
wbia.algo.hots.neighbor_index_cache module
NEEDS CLEANUP
- class wbia.algo.hots.neighbor_index_cache.UUIDMapHyrbridCache[source]
Bases:
object
Class that lets multiple ways of writing to the uuid_map be swapped in and out interchangeably
TODO: the global read / write should periodically sync itself to disk and it should be loaded from disk initially
- read_uuid_map_dict(uuid_map_fpath, min_reindex_thresh)[source]
uses in memory dictionary instead of disk
- write_uuid_map_dict(uuid_map_fpath, visual_uuid_list, daids_hashid)[source]
uses in memory dictionary instead of disk
Lets the multi-indexer know about any big caches we’ve made. Also lets the nnindexer know about other prebuilt indexers so it can attempt to just add points to them, so as to avoid a rebuild.
- wbia.algo.hots.neighbor_index_cache.background_flann_func(cachedir, daid_list, vecs_list, fgws_list, fxs_list, flann_params, cfgstr, uuid_map_fpath, daids_hashid, visual_uuid_list, min_reindex_thresh)[source]
FIXME: Duplicate code
- wbia.algo.hots.neighbor_index_cache.build_nnindex_cfgstr(qreq_, daid_list)[source]
Builds a string that uniquely identifies an indexer built with parameters from the input query request and indexing descriptors from the input annotation ids
- Parameters
qreq (QueryRequest) – query request object with hyper-parameters
daid_list (list) –
- Returns
nnindex_cfgstr
- Return type
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-build_nnindex_cfgstr
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> ibs = wbia.opendb(db='testdb1')
>>> daid_list = ibs.get_valid_aids(species=wbia.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list, cfgdict=dict(fg_on=False))
>>> nnindex_cfgstr = build_nnindex_cfgstr(qreq_, daid_list)
>>> result = str(nnindex_cfgstr)
>>> print(result)
_VUUIDS((6)ylydksaqdigdecdd)_FLANN(8_kdtrees)_FeatureWeight(detector=cnn,sz256,thresh=20,ksz=20,enabled=False)_FeatureWeight(detector=cnn,sz256,thresh=20,ksz=20,enabled=False)
_VUUIDS((6)ylydksaqdigdecdd)_FLANN(8_kdtrees)_FEATWEIGHT(OFF)_FEAT(hesaff+sift_)_CHIP(sz450)
- wbia.algo.hots.neighbor_index_cache.check_background_process()[source]
checks to see if the process has finished and then writes the uuid map to disk
- wbia.algo.hots.neighbor_index_cache.clear_uuid_cache(qreq_)[source]
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-clear_uuid_cache
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', p='default:fg_on=True')
>>> fgws_list = clear_uuid_cache(qreq_)
>>> result = str(fgws_list)
>>> print(result)
- wbia.algo.hots.neighbor_index_cache.get_nnindexer_uuid_map_fpath(qreq_)[source]
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache get_nnindexer_uuid_map_fpath
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', p='default:fgw_thresh=.3')
>>> uuid_map_fpath = get_nnindexer_uuid_map_fpath(qreq_)
>>> result = str(ut.path_ndir_split(uuid_map_fpath, 3))
>>> print(result)
…/_wbia_cache/flann/uuid_map_mzwwsbjisbkdxorl.cPkl
…/_wbia_cache/flann/uuid_map_FLANN(8_kdtrees_fgwthrsh=0.3)_Feat(hesaff+sift)_Chip(sz700,width).cPkl
…/_wbia_cache/flann/uuid_map_FLANN(8_kdtrees)_Feat(hesaff+sift)_Chip(sz700,width).cPkl
…/_wbia_cache/flann/uuid_map_FLANN(8_kdtrees)_FEAT(hesaff+sift_)_CHIP(sz450).cPkl
- wbia.algo.hots.neighbor_index_cache.group_daids_by_cached_nnindexer(qreq_, daid_list, min_reindex_thresh, max_covers=None)[source]
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-group_daids_by_cached_nnindexer
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> ibs = wbia.opendb('testdb1')
>>> ZEB_PLAIN = wbia.const.TEST_SPECIES.ZEB_PLAIN
>>> daid_list = ibs.get_valid_aids(species=ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> # Set the params a bit lower
>>> max_covers = None
>>> qreq_.qparams.min_reindex_thresh = 1
>>> min_reindex_thresh = qreq_.qparams.min_reindex_thresh
>>> # STEP 0: CLEAR THE CACHE
>>> clear_uuid_cache(qreq_)
>>> # STEP 1: ASSERT EMPTY INDEX
>>> daid_list = sorted(ibs.get_valid_aids(species=ZEB_PLAIN))[0:3]
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result1 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result1, ([], [[1, 2, 3]]), 'pre request')
>>> # TEST 2: SHOULD MAKE 123 COVERED
>>> nnindexer = request_memcached_wbia_nnindexer(qreq_, daid_list)
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result2 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result2, ([], [[1, 2, 3]]), 'post request')
- wbia.algo.hots.neighbor_index_cache.new_neighbor_index(daid_list, vecs_list, fgws_list, fxs_list, flann_params, cachedir, cfgstr, force_rebuild=False, verbose=True, memtrack=None, prog_hook=None)[source]
constructs neighbor index independent of wbia
- Parameters
- Returns
nnindexer
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-new_neighbor_index
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', a='default:species=zebra_plains', p='default:fgw_thresh=.999')
>>> daid_list = qreq_.daids
>>> nnindex_cfgstr = build_nnindex_cfgstr(qreq_, daid_list)
>>> ut.exec_funckw(new_neighbor_index, globals())
>>> cfgstr = nnindex_cfgstr
>>> cachedir = qreq_.ibs.get_flann_cachedir()
>>> flann_params = qreq_.qparams.flann_params
>>> # Get annot descriptors to index
>>> vecs_list, fgws_list, fxs_list = get_support_data(qreq_, daid_list)
>>> nnindexer = new_neighbor_index(daid_list, vecs_list, fgws_list, fxs_list, flann_params, cachedir, cfgstr, verbose=True)
>>> result = ('nnindexer.ax2_aid = %s' % (str(nnindexer.ax2_aid),))
>>> print(result)
nnindexer.ax2_aid = [1 2 3 4 5 6]
- wbia.algo.hots.neighbor_index_cache.print_uuid_cache(qreq_)[source]
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-print_uuid_cache
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='PZ_Master0', p='default:fg_on=False')
>>> print_uuid_cache(qreq_)
- wbia.algo.hots.neighbor_index_cache.request_augmented_wbia_nnindexer(qreq_, daid_list, verbose=True, use_memcache=True, force_rebuild=False, memtrack=None)[source]
DO NOT USE. THIS FUNCTION CAN CURRENTLY CAUSE A SEGFAULT
Tries to give you an indexer for the requested daids using the least amount of computation possible, by loading and adding to a partially built nnindex if possible; if that fails, it falls back to request_memcache.
- Parameters
qreq (QueryRequest) – query request object with hyper-parameters
daid_list (list) –
- Returns
nnindex_cfgstr
- Return type
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-request_augmented_wbia_nnindexer
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> # build test data
>>> ZEB_PLAIN = wbia.const.TEST_SPECIES.ZEB_PLAIN
>>> ibs = wbia.opendb('testdb1')
>>> use_memcache, max_covers, verbose = True, None, True
>>> daid_list = sorted(ibs.get_valid_aids(species=ZEB_PLAIN))[0:6]
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> qreq_.qparams.min_reindex_thresh = 1
>>> min_reindex_thresh = qreq_.qparams.min_reindex_thresh
>>> # CLEAR CACHE for clean test
>>> clear_uuid_cache(qreq_)
>>> # LOAD 3 AIDS INTO CACHE
>>> aid_list = sorted(ibs.get_valid_aids(species=ZEB_PLAIN))[0:3]
>>> # Should fallback
>>> nnindexer = request_augmented_wbia_nnindexer(qreq_, aid_list)
>>> # assert the fallback
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result2 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result2, ([4, 5, 6], [[1, 2, 3]]), 'pre augment')
>>> # Should augment
>>> nnindexer = request_augmented_wbia_nnindexer(qreq_, daid_list)
>>> uncovered_aids, covered_aids_list = group_daids_by_cached_nnindexer(
...     qreq_, daid_list, min_reindex_thresh, max_covers)
>>> result3 = uncovered_aids, covered_aids_list
>>> ut.assert_eq(result3, ([], [[1, 2, 3, 4, 5, 6]]), 'post augment')
>>> # Should fallback
>>> nnindexer2 = request_augmented_wbia_nnindexer(qreq_, daid_list)
>>> assert nnindexer is nnindexer2
- wbia.algo.hots.neighbor_index_cache.request_background_nnindexer(qreq_, daid_list)[source]
FIXME: Duplicate code
- Parameters
qreq (QueryRequest) – query request object with hyper-parameters
daid_list (list) –
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-request_background_nnindexer
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> # build test data
>>> ibs = wbia.opendb('testdb1')
>>> daid_list = ibs.get_valid_aids(species=wbia.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> # execute function
>>> request_background_nnindexer(qreq_, daid_list)
>>> # verify results
>>> result = str(False)
>>> print(result)
- wbia.algo.hots.neighbor_index_cache.request_diskcached_wbia_nnindexer(qreq_, daid_list, nnindex_cfgstr=None, verbose=True, force_rebuild=False, memtrack=None, prog_hook=None)[source]
Builds a new NeighborIndexer, which will try to use a disk-cached FLANN index if available
- Parameters
qreq (QueryRequest) – query request object with hyper-parameters
daid_list (list) –
nnindex_cfgstr –
verbose (bool) –
- Returns
nnindexer
- Return type
NeighborIndexer
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-request_diskcached_wbia_nnindexer
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> # build test data
>>> ibs = wbia.opendb('testdb1')
>>> daid_list = ibs.get_valid_aids(species=wbia.const.TEST_SPECIES.ZEB_PLAIN)
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> nnindex_cfgstr = build_nnindex_cfgstr(qreq_, daid_list)
>>> verbose = True
>>> # execute function
>>> nnindexer = request_diskcached_wbia_nnindexer(qreq_, daid_list, nnindex_cfgstr, verbose)
>>> # verify results
>>> result = str(nnindexer)
>>> print(result)
- wbia.algo.hots.neighbor_index_cache.request_memcached_wbia_nnindexer(qreq_, daid_list, use_memcache=True, verbose=True, veryverbose=False, force_rebuild=False, memtrack=None, prog_hook=None)[source]
FOR INTERNAL USE ONLY. Takes a custom daid list, which might not be the same as what is in qreq_.
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache --test-request_memcached_wbia_nnindexer
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> import wbia
>>> # build test data
>>> ibs = wbia.opendb('testdb1')
>>> ZEB_PLAIN = wbia.const.TEST_SPECIES.ZEB_PLAIN
>>> daid_list = ibs.get_valid_aids(species=ZEB_PLAIN)[0:3]
>>> qreq_ = ibs.new_query_request(daid_list, daid_list)
>>> qreq_.qparams.min_reindex_thresh = 3
>>> verbose = True
>>> use_memcache = True
>>> # execute function
>>> nnindexer = request_memcached_wbia_nnindexer(qreq_, daid_list, use_memcache)
>>> # verify results
>>> result = str(nnindexer)
>>> print(result)
- wbia.algo.hots.neighbor_index_cache.request_wbia_nnindexer(qreq_, verbose=True, **kwargs)[source]
CALLED BY QUERYREQUEST::LOAD_INDEXER. IBEIS interface into neighbor_index_cache.
- Parameters
qreq (QueryRequest) – hyper-parameters
- Returns
nnindexer
- Return type
NeighborIndexer
- CommandLine:
python -m wbia.algo.hots.neighbor_index_cache request_wbia_nnindexer
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> nnindexer, qreq_, ibs = testdata_nnindexer(None)
>>> nnindexer = request_wbia_nnindexer(qreq_)
- wbia.algo.hots.neighbor_index_cache.testdata_nnindexer(dbname='testdb1', with_indexer=True, use_memcache=True)[source]
- Ignore:
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> nnindexer, qreq_, ibs = testdata_nnindexer('PZ_Master1')
>>> S = np.cov(nnindexer.idx2_vec.T)
>>> import wbia.plottool as pt
>>> pt.ensureqt()
>>> pt.plt.imshow(S)
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index_cache import * # NOQA
>>> nnindexer, qreq_, ibs = testdata_nnindexer()
wbia.algo.hots.nn_weights module
- wbia.algo.hots.nn_weights.all_normalized_weights_test()[source]
- CommandLine:
python -m wbia.algo.hots.nn_weights --exec-all_normalized_weights_test
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> all_normalized_weights_test()
- wbia.algo.hots.nn_weights.apply_normweight(normweight_fn, neighb_normk, neighb_idx, neighb_dist, Knorm)[source]
Helper that applies the normalized weight function to one query annotation
- Parameters
normweight_fn (func) – chosen weight function e.g. lnbnn
qaid (int) – query annotation id
neighb_idx (ndarray[int32_t, ndims=2]) – mapping from query feature index to db neighbor index
neighb_dist (ndarray) – mapping from query feature index to dist
Knorm (int) –
qreq (QueryRequest) – query request object with hyper-parameters
- Returns
neighb_normweight
- Return type
ndarray
- CommandLine:
python -m wbia.algo.hots.nn_weights --test-apply_normweight
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> from wbia.algo.hots import nn_weights
>>> #cfgdict = {'K':10, 'Knorm': 10, 'normalizer_rule': 'name',
>>> #           'dim_size': 450, 'resize_dim': 'area'}
>>> #tup = plh.testdata_pre_weight_neighbors(cfgdict=cfgdict)
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>     p=['default:K=10,Knorm=10,normalizer_rule=name,dim_size=450,resize_dim=area'])
>>> nns_list, nnvalid0_list = args
>>> qaid = qreq_.qaids[0]
>>> Knorm = qreq_.qparams.Knorm
>>> normweight_fn = lnbnn_fn
>>> normalizer_rule = qreq_.qparams.normalizer_rule
>>> (neighb_idx, neighb_dist) = nns_list[0]
>>> neighb_normk = get_normk(qreq_, qaid, neighb_idx, Knorm, normalizer_rule)
>>> neighb_normweight = nn_weights.apply_normweight(
>>>     normweight_fn, neighb_normk, neighb_idx, neighb_dist, Knorm)
>>> ut.assert_inbounds(neighb_normweight.sum(), 600, 950)
- wbia.algo.hots.nn_weights.bar_l2_fn(vdist, ndist)[source]
The feature weight is (1 - the euclidean distance between the features). The normalizers are unused.
(not really a normalized function)
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = bar_l2_fn(vdist, ndist)
>>> result = ut.hz_str('barl2 = ', ut.repr2(out, precision=2))
>>> print(result)
barl2 = np.array([[1.  , 0.6 , 0.41],
                  [0.83, 0.7 , 0.49],
                  [0.87, 0.58, 0.27],
                  [0.88, 0.63, 0.46],
                  [0.82, 0.53, 0.5 ]])
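Consistent with the expected output above, the weight works out to simply `1.0 - vdist`, with the normalizer distances ignored. A minimal standalone sketch under that assumption (hypothetical `bar_l2_sketch`, not the wbia code):

```python
import numpy as np

def bar_l2_sketch(vdist, ndist):
    # ndist is accepted for API symmetry with the other weight
    # functions but is deliberately unused
    return 1.0 - vdist

# First row of testdata_vn_dists from the doctest above
vdist = np.array([[0.0, 0.4, 0.59]])
print(bar_l2_sketch(vdist, ndist=None))  # [[1.   0.6  0.41]]
```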
- wbia.algo.hots.nn_weights.const_match_weighter(nns_list, nnvalid0_list, qreq_)[source]
Example
>>> # DISABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> #tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='PZ_MTEST')
>>> nns_list, nnvalid0_list = args
>>> constvote_weight_list = const_match_weighter(nns_list, nnvalid0_list, qreq_)
>>> result = ('constvote_weight_list = %s' % (str(constvote_weight_list),))
>>> print(result)
- wbia.algo.hots.nn_weights.fg_match_weighter(nns_list, nnvalid0_list, qreq_)[source]
foreground feature match weighting
- CommandLine:
python -m wbia.algo.hots.nn_weights --exec-fg_match_weighter
Example
>>> # xdoctest: +REQUIRES(module:wbia_cnn)
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> #tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> #ibs, qreq_, nns_list, nnvalid0_list = tup
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='PZ_MTEST')
>>> nns_list, nnvalid0_list = args
>>> print(ut.repr2(qreq_.qparams.__dict__, sorted_=True))
>>> assert qreq_.qparams.fg_on == True, 'bug setting custom params fg_on'
>>> fgvotes_list = fg_match_weighter(nns_list, nnvalid0_list, qreq_)
>>> print('fgvotes_list = %r' % (fgvotes_list,))
- wbia.algo.hots.nn_weights.get_name_normalizers(qaid, qreq_, Knorm, neighb_idx)[source]
Helper that computes normalizers for the 'name' normalizer_rule
- Parameters
- Returns
neighb_normk
- Return type
ndarray
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> from wbia.algo.hots import nn_weights
>>> #cfgdict = {'K':10, 'Knorm': 10, 'normalizer_rule': 'name'}
>>> #tup = plh.testdata_pre_weight_neighbors(cfgdict=cfgdict)
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>     p=['default:K=10,Knorm=10,normalizer_rule=name'])
>>> nns_list, nnvalid0_list = args
>>> Knorm = qreq_.qparams.Knorm
>>> (neighb_idx, neighb_dist) = nns_list[0]
>>> qaid = qreq_.qaids[0]
>>> neighb_normk = get_name_normalizers(qaid, qreq_, Knorm, neighb_idx)
- wbia.algo.hots.nn_weights.get_normk(qreq_, qaid, neighb_idx, Knorm, normalizer_rule)[source]
Get positions of the LNBNN/ratio test normalizers
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> cfgdict = {'K': 10, 'Knorm': 10, 'normalizer_rule': 'name',
>>>            'dim_size': 450, 'resize_dim': 'area'}
>>> #tup = plh.testdata_pre_weight_neighbors(cfgdict=cfgdict)
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>     p=['default:K=10,Knorm=10,normalizer_rule=name,dim_size=450,resize_dim=area'])
>>> nns_list, nnvalid0_list = args
>>> (neighb_idx, neighb_dist) = nns_list[0]
>>> qaid = qreq_.qaids[0]
>>> K = qreq_.qparams.K
>>> Knorm = qreq_.qparams.Knorm
>>> neighb_normk1 = get_normk(qreq_, qaid, neighb_idx, Knorm, 'last')
>>> neighb_normk2 = get_normk(qreq_, qaid, neighb_idx, Knorm, 'name')
>>> assert np.all(neighb_normk1 == Knorm + K)
>>> assert np.all(neighb_normk2 <= Knorm + K) and np.all(neighb_normk2 > K)
- wbia.algo.hots.nn_weights.lnbnn_fn(vdist, ndist)[source]
Local Naive Bayes Nearest Neighbor weighting
References
http://www.cs.ubc.ca/~lowe/papers/12mccannCVPR.pdf
http://www.cs.ubc.ca/~sanchom/local-naive-bayes-nearest-neighbor
- Sympy:
>>> import sympy
>>> #https://github.com/sympy/sympy/pull/10247
>>> from sympy import log
>>> from sympy.stats import P, E, variance, Die, Normal, FiniteRV
>>> C, Cbar = sympy.symbols('C Cbar')
>>> d_i = Die(sympy.symbols('di'), 6)
>>> log(P(d_i, C) / P(d_i, Cbar))
>>> #
>>> PdiC, PdiCbar = sympy.symbols('PdiC, PdiCbar')
>>> oddsC = log(PdiC / PdiCbar)
>>> sympy.simplify(oddsC)
>>> import vtool as vt
>>> vt.check_expr_eq(oddsC, log(PdiC) - log(PdiCbar))
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = lnbnn_fn(vdist, ndist)
>>> result = ut.hz_str('lnbnn = ', ut.repr2(out, precision=2))
>>> print(result)
lnbnn = np.array([[0.62, 0.22, 0.03],
                  [0.35, 0.22, 0.01],
                  [0.87, 0.58, 0.27],
                  [0.67, 0.42, 0.25],
                  [0.59, 0.3 , 0.27]])
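The expected output above is consistent with the weight being the plain difference `ndist - vdist`, i.e. how much closer each match is than its normalizer. A minimal standalone sketch under that assumption (hypothetical `lnbnn_sketch`, not the wbia implementation):

```python
import numpy as np

def lnbnn_sketch(vdist, ndist):
    # Each of the K match distances in a row is scored against that row's
    # single normalizer distance; ndist has shape (n, 1) and broadcasts
    # over the K columns of vdist
    return ndist - vdist

# First two rows of testdata_vn_dists from the doctest above
vdist = np.array([[0.0, 0.4, 0.59],
                  [0.17, 0.3, 0.51]])
ndist = np.array([[0.62],
                  [0.52]])
print(lnbnn_sketch(vdist, ndist))
```

Larger values mean a more distinctive match: a distance far below the normalizer distance earns a high weight, while a match barely closer than its normalizer earns a weight near zero.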
- wbia.algo.hots.nn_weights.logger = <Logger wbia (WARNING)>
qfx2_ no longer applies due to fgw_thresh. Need to change names in this file
TODO: replace testdata_pre_weight_neighbors with
>>> qreq_, args = plh.testdata_pre('weight_neighbors', defaultdb='testdb1',
>>>     a=['default:qindex=0:1,dindex=0:5,hackerrors=False'],
>>>     p=['default:codename=vsmany,bar_l2_on=True,fg_on=False'], verbose=True)
- Type
FIXME
- wbia.algo.hots.nn_weights.loglnbnn_fn(vdist, ndist)[source]
- Ignore:
import vtool as vt
vt.check_expr_eq('log(d) - log(n)', 'log(d / n)')  # True
vt.check_expr_eq('log(d) / log(n)', 'log(d - n)')
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = loglnbnn_fn(vdist, ndist)
>>> result = ut.hz_str('loglnbnn = ', ut.repr2(out, precision=2))
>>> print(result)
loglnbnn = np.array([[0.48, 0.2 , 0.03],
                     [0.3 , 0.2 , 0.01],
                     [0.63, 0.46, 0.24],
                     [0.51, 0.35, 0.22],
                     [0.46, 0.26, 0.24]])
- wbia.algo.hots.nn_weights.logratio_fn(vdist, ndist)[source]
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = normonly_fn(vdist, ndist)
>>> result = ut.repr2(out)
>>> print(result)
np.array([[0.62, 0.62, 0.62],
          [0.52, 0.52, 0.52],
          [1. , 1. , 1. ],
          [0.79, 0.79, 0.79],
          [0.77, 0.77, 0.77]])
- wbia.algo.hots.nn_weights.mark_name_valid_normalizers(qnid, neighb_topnid, neighb_normnid)[source]
Helper func that allows matches only to the first result for a name
Each query feature finds its K matches and Kn normalizing matches. These are the candidates from which it can choose a set of matches and a single normalizer.
A normalizer is marked as invalid if it belongs to a name that was also in its feature’s candidate matching set.
- Parameters
neighb_topnid (ndarray) – marks the names a feature matches
neighb_normnid (ndarray) – marks the names of the feature normalizers
qnid (int) – query name id
- Returns
neighb_selnorm - index of the selected normalizer for each query feature
- CommandLine:
python -m wbia.algo.hots.nn_weights --exec-mark_name_valid_normalizers
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> qnid = 1
>>> neighb_topnid = np.array([[1, 1, 1, 1, 1],
...                          [1, 2, 1, 1, 1],
...                          [1, 2, 2, 3, 1],
...                          [5, 8, 9, 8, 8],
...                          [5, 8, 9, 8, 8],
...                          [6, 6, 9, 6, 8],
...                          [5, 8, 6, 6, 6],
...                          [1, 2, 8, 6, 6]], dtype=np.int32)
>>> neighb_normnid = np.array([[ 1, 1, 1],
...                           [ 2, 3, 1],
...                           [ 2, 3, 1],
...                           [ 6, 6, 6],
...                           [ 6, 6, 8],
...                           [ 2, 6, 6],
...                           [ 6, 6, 1],
...                           [ 4, 4, 9]], dtype=np.int32)
>>> neighb_selnorm = mark_name_valid_normalizers(qnid, neighb_topnid, neighb_normnid)
>>> K = len(neighb_topnid.T)
>>> Knorm = len(neighb_normnid.T)
>>> neighb_normk_ = neighb_selnorm + (Knorm)  # convert from negative to positive indexes
>>> result = str(neighb_normk_)
>>> print(result)
[2 1 2 0 0 0 2 0]
- Ignore:
logger.info(ut.doctest_repr(neighb_normnid, 'neighb_normnid', verbose=False))
logger.info(ut.doctest_repr(neighb_topnid, 'neighb_topnid', verbose=False))
- wbia.algo.hots.nn_weights.nn_normalized_weight(normweight_fn, nns_list, nnvalid0_list, qreq_)[source]
Generic function to weight nearest neighbors
ratio, lnbnn, and other nearest neighbor based functions use this
- Parameters
normweight_fn (func) – chosen weight function e.g. lnbnn
nns_list (dict) – query descriptor nearest neighbors and distances.
nnvalid0_list (list) – list of neighbors preflagged as valid
qreq (QueryRequest) – hyper-parameters
- Returns
weights_list
- Return type
- CommandLine:
python -m wbia.algo.hots.nn_weights nn_normalized_weight --show
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> #tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> #ibs, qreq_, nns_list, nnvalid0_list = tup
>>> qreq_, args = plh.testdata_pre('weight_neighbors',
>>>     defaultdb='PZ_MTEST')
>>> nns_list, nnvalid0_list = args
>>> normweight_fn = lnbnn_fn
>>> weights_list1, normk_list1 = nn_normalized_weight(
>>>     normweight_fn, nns_list, nnvalid0_list, qreq_)
>>> weights1 = weights_list1[0]
>>> nn_normonly_weight = NN_WEIGHT_FUNC_DICT['lnbnn']
>>> weights_list2, normk_list2 = nn_normonly_weight(nns_list, nnvalid0_list, qreq_)
>>> weights2 = weights_list2[0]
>>> assert np.all(weights1 == weights2)
>>> ut.assert_inbounds(weights1.sum(), 100, 510)
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> #tup = plh.testdata_pre_weight_neighbors('PZ_MTEST')
>>> qreq_, args = plh.testdata_pre('weight_neighbors',
>>>     defaultdb='PZ_MTEST')
>>> nns_list, nnvalid0_list = args
>>> normweight_fn = ratio_fn
>>> weights_list1, normk_list1 = nn_normalized_weight(normweight_fn, nns_list, nnvalid0_list, qreq_)
>>> weights1 = weights_list1[0]
>>> nn_normonly_weight = NN_WEIGHT_FUNC_DICT['ratio']
>>> weights_list2, normk_list2 = nn_normonly_weight(nns_list, nnvalid0_list, qreq_)
>>> weights2 = weights_list2[0]
>>> assert np.all(weights1 == weights2)
>>> ut.assert_inbounds(weights1.sum(), 1500, 4500)
- wbia.algo.hots.nn_weights.normonly_fn(vdist, ndist)[source]
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = normonly_fn(vdist, ndist)
>>> result = ut.repr2(out)
>>> print(result)
np.array([[0.62, 0.62, 0.62],
          [0.52, 0.52, 0.52],
          [1. , 1. , 1. ],
          [0.79, 0.79, 0.79],
          [0.77, 0.77, 0.77]])
- wbia.algo.hots.nn_weights.ratio_fn(vdist, ndist)[source]
- Parameters
vdist (ndarray) – voting array
ndist (ndarray) – normalizing array
- Returns
out
- Return type
ndarray
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> out = ratio_fn(vdist, ndist)
>>> result = ut.hz_str('ratio = ', ut.repr2(out, precision=2))
>>> print(result)
ratio = np.array([[0.  , 0.65, 0.95],
                  [0.33, 0.58, 0.98],
                  [0.13, 0.42, 0.73],
                  [0.15, 0.47, 0.68],
                  [0.23, 0.61, 0.65]])
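Consistent with the expected output above, the weight is the plain quotient `vdist / ndist` (the classic Lowe-style ratio test, where smaller values indicate more distinctive matches). A minimal standalone sketch under that assumption (hypothetical `ratio_sketch`, not the wbia code):

```python
import numpy as np

def ratio_sketch(vdist, ndist):
    # Match distance relative to its normalizer distance; ndist has
    # shape (n, 1) and broadcasts over the K columns of vdist
    return vdist / ndist

# First two rows of testdata_vn_dists from the doctest above
vdist = np.array([[0.0, 0.4, 0.59],
                  [0.17, 0.3, 0.51]])
ndist = np.array([[0.62],
                  [0.52]])
print(np.round(ratio_sketch(vdist, ndist), 2))
```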
- wbia.algo.hots.nn_weights.testdata_vn_dists(nfeats=5, K=3)[source]
Test voting and normalizing distances
- Returns
(vdist, ndist) - test voting distances and normalizer distances
- Return type
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.nn_weights import * # NOQA
>>> vdist, ndist = testdata_vn_dists()
>>> result = (ut.hz_str('vdist = ', ut.repr2(vdist))) + '\n'
>>> print(result + (ut.hz_str('ndist = ', ut.repr2(ndist))))
vdist = np.array([[0.  , 0.4 , 0.59],
                  [0.17, 0.3 , 0.51],
                  [0.13, 0.42, 0.73],
                  [0.12, 0.37, 0.54],
                  [0.18, 0.47, 0.5 ]])
ndist = np.array([[0.62],
                  [0.52],
                  [1.  ],
                  [0.79],
                  [0.77]])
wbia.algo.hots.old_chip_match module
- class wbia.algo.hots.old_chip_match.AlignedListDictProxy(key2_idx, key_list, val_list)[source]
Bases:
utool.util_dev.DictLike_old
Simulates a dict when using parallel lists. The point of this class is that when there are many instances of it, key2_idx can be shared between them. Ideally this class won’t be used and will disappear when the parallel lists are being used properly.
DEPRECATE: AlignedListDictProxy’s defaultdict behavior is weird
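The parallel-list idea can be sketched in a few lines; `AlignedListDictProxySketch` below is a hypothetical simplification for illustration, not the utool-backed class:

```python
class AlignedListDictProxySketch:
    """Dict-like view over parallel lists. The key -> position mapping
    (key2_idx) can be shared between many proxies that all align to the
    same key_list, so it is stored once instead of per instance."""

    def __init__(self, key2_idx, key_list, val_list):
        self.key2_idx = key2_idx    # shared key -> position mapping
        self.key_list = key_list    # shared, parallel to val_list
        self.val_list = val_list    # per-instance values

    def __getitem__(self, key):
        return self.val_list[self.key2_idx[key]]

    def __setitem__(self, key, val):
        self.val_list[self.key2_idx[key]] = val

keys = ['a', 'b', 'c']
key2_idx = {k: i for i, k in enumerate(keys)}  # built once, shared
proxy1 = AlignedListDictProxySketch(key2_idx, keys, [1, 2, 3])
proxy2 = AlignedListDictProxySketch(key2_idx, keys, [10, 20, 30])
print(proxy1['b'], proxy2['b'])  # 2 20
```

Both proxies reuse the same `key2_idx`, which is the memory-saving point the docstring makes: only the value lists differ per instance.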
wbia.algo.hots.pipeline module
Hotspotter pipeline module
- Module Notation and Concepts:
PREFIXES:
qaid2_XXX - prefix mapping query chip index to XXX
qfx2_XXX - prefix mapping query chip feature index to XXX
nns - a (qfx2_idx, qfx2_dist) tuple
idx - the index into the nnindexers descriptors
qfx - query feature index wrt the query chip
dfx - database feature index wrt the database chip
dist - the distance to a corresponding feature
fm - a list of feature match pairs / correspondences (qfx, dfx)
fsv - a score vector of a corresponding feature
valid - a valid bit for a corresponding feature
PIPELINE_VARS:
nns_list - mapping from query chip index to nns
qfx2_idx - ranked list of query feature indexes to database feature indexes
qfx2_dist - ranked list of distances from query features to database features
- qaid2_norm_weight - mapping from qaid to (qfx2_normweight, qfx2_selnorm)
= qaid2_nnfiltagg[qaid]
- CommandLine:
To see the output of a complete pipeline run, use
# Set to whichever database you like
python main.py --db PZ_MTEST --setdb
python main.py --db NAUT_test --setdb
python main.py --db testdb1 --setdb
# Then run whichever configuration you like
python main.py --query 1 --yes --noqcache -t default:codename=vsmany
python main.py --query 1 --yes --noqcache -t default:codename=vsmany_nsum
Todo
Don’t preload the nn-indexer in case the nearest neighbors have already
been computed?
- class wbia.algo.hots.pipeline.Neighbors(qaid, idxs, dists, qfxs)[source]
Bases:
utool.util_dev.NiceRepr
- neighb_dists
- neighb_idxs
- property num_query_feats
- qaid
- qfx_list
- wbia.algo.hots.pipeline.WeightRet_
alias of
wbia.algo.hots.pipeline.weight_ret
- wbia.algo.hots.pipeline.baseline_neighbor_filter(qreq_, nns_list, impossible_daids_list, verbose=False)[source]
Removes matches to self, the same image, or the same name.
- CommandLine:
python -m wbia.algo.hots.pipeline --test-baseline_neighbor_filter
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> qreq_, args = plh.testdata_pre( >>> 'baseline_neighbor_filter', defaultdb='testdb1', >>> qaid_override=[1, 2, 3, 4], >>> daid_override=list(range(1, 11)), >>> p=['default:QRH=False,requery=False,can_match_samename=False'], >>> verbose=True) >>> nns_list, impossible_daids_list = args >>> nnvalid0_list = baseline_neighbor_filter(qreq_, nns_list, >>> impossible_daids_list) >>> ut.assert_eq(len(nnvalid0_list), len(qreq_.qaids)) >>> assert not np.any(nnvalid0_list[0][:, 0]), ( ... 'first col should be all invalid because of self match') >>> assert not np.all(nnvalid0_list[0][:, 1]), ( ... 'second col should have some good matches') >>> ut.assert_inbounds(nnvalid0_list[0].sum(), 1000, 10000)
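The core of the baseline filter can be sketched as a vectorized membership test. This is a hedged stand-in for the real logic: `idx2_daid` (mapping descriptor index to its owning daid) is an assumed helper array, not the actual API.

```python
import numpy as np

def baseline_filter(qfx2_idx, idx2_daid, impossible_daids):
    # A neighbor is valid iff the database annotation it came from
    # is not in the impossible set (self, same image, same name).
    qfx2_daid = idx2_daid[qfx2_idx]
    return ~np.isin(qfx2_daid, impossible_daids)

idx2_daid = np.array([1, 1, 2, 2, 3, 3])   # owner daid of each descriptor
qfx2_idx = np.array([[0, 2], [4, 1]])      # neighbor indexes per query feature
valid0 = baseline_filter(qfx2_idx, idx2_daid, impossible_daids=[1])
print(valid0)
```

Neighbors landing on daid 1 are marked invalid; the rest survive.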
- wbia.algo.hots.pipeline.build_chipmatches(qreq_, nns_list, nnvalid0_list, filtkey_list, filtweights_list, filtvalids_list, filtnormks_list, verbose=False)[source]
pipeline step 4 - builds sparse chipmatches
Takes the dense feature matches from query feature to (what could be any) database features and builds sparse matching pairs for each annotation to annotation match.
- CommandLine:
python -m wbia build_chipmatches
python -m wbia build_chipmatches:0 --show
python -m wbia build_chipmatches:1 --show
python -m wbia build_chipmatches:2 --show
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> qreq_, args = plh.testdata_pre( >>> 'build_chipmatches', p=['default:codename=vsmany']) >>> (nns_list, nnvalid0_list, filtkey_list, filtweights_list, >>> filtvalids_list, filtnormks_list) = args >>> verbose = True >>> cm_list = build_chipmatches(qreq_, *args, verbose=verbose) >>> # verify results >>> [cm.assert_self(qreq_) for cm in cm_list] >>> cm = cm_list[0] >>> fm = cm.fm_list[cm.daid2_idx[2]] >>> num_matches = len(fm) >>> print('vsmany num_matches = %r' % num_matches) >>> ut.assert_inbounds(num_matches, 500, 2000, 'vsmany nmatches out of bounds') >>> ut.quit_if_noshow() >>> cm.score_annot_csum(qreq_) >>> cm_list[0].ishow_single_annotmatch(qreq_) >>> ut.show_if_requested()
Example
>>> # DISABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> # Test to make sure filtering by feature weights works >>> qreq_, args = plh.testdata_pre( >>> 'build_chipmatches', >>> p=['default:codename=vsmany,fgw_thresh=.9']) >>> (nns_list, nnvalid0_list, filtkey_list, filtweights_list, >>> filtvalids_list, filtnormks_list) = args >>> verbose = True >>> cm_list = build_chipmatches(qreq_, *args, verbose=verbose) >>> # verify results >>> [cm.assert_self(qreq_) for cm in cm_list] >>> cm = cm_list[0] >>> fm = cm.fm_list[cm.daid2_idx[2]] >>> num_matches = len(fm) >>> print('num_matches = %r' % num_matches) >>> ut.assert_inbounds(num_matches, 100, 410, 'vsmany nmatches out of bounds') >>> ut.quit_if_noshow() >>> cm.score_annot_csum(qreq_) >>> cm_list[0].ishow_single_annotmatch(qreq_) >>> ut.show_if_requested()
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> qreq_, args = plh.testdata_pre( >>> 'build_chipmatches', p=['default:requery=True'], a='default') >>> (nns_list, nnvalid0_list, filtkey_list, filtweights_list, >>> filtvalids_list, filtnormks_list) = args >>> verbose = True >>> cm_list = build_chipmatches(qreq_, *args, verbose=verbose) >>> # verify results >>> [cm.assert_self(qreq_) for cm in cm_list] >>> scoring.score_chipmatch_list(qreq_, cm_list, 'csum') >>> cm = cm_list[0] >>> for cm in cm_list: >>> # should be positive for LNBNN >>> assert np.all(cm.score_list[np.isfinite(cm.score_list)] >= 0)
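The dense-to-sparse grouping in this step can be sketched as follows. All names here are illustrative assumptions (`idx2_daid`, `idx2_dfx` map a descriptor index to its annotation and feature); the real implementation is vectorized, but the grouping logic is the same idea:

```python
from collections import defaultdict
import numpy as np

def group_matches(qfx2_idx, qfx2_valid, idx2_daid, idx2_dfx):
    # Group every valid (query feature, neighbor) pair by the database
    # annotation it hit, producing one sparse fm list per daid.
    daid2_fm = defaultdict(list)
    for qfx in range(qfx2_idx.shape[0]):
        for k in range(qfx2_idx.shape[1]):
            if qfx2_valid[qfx, k]:
                idx = qfx2_idx[qfx, k]
                daid2_fm[idx2_daid[idx]].append((qfx, idx2_dfx[idx]))
    return dict(daid2_fm)

idx2_daid = np.array([1, 1, 2, 2])
idx2_dfx = np.array([0, 1, 0, 1])
qfx2_idx = np.array([[0, 2], [1, 3]])
qfx2_valid = np.array([[True, True], [False, True]])
daid2_fm = group_matches(qfx2_idx, qfx2_valid, idx2_daid, idx2_dfx)
print(daid2_fm)
```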
- wbia.algo.hots.pipeline.build_impossible_daids_list(qreq_, verbose=False)[source]
- Parameters
qreq (QueryRequest) – query request object with hyper-parameters
- CommandLine:
python -m wbia.algo.hots.pipeline --test-build_impossible_daids_list
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> import wbia >>> qreq_ = wbia.testdata_qreq_( >>> defaultdb='testdb1', >>> a='default:species=zebra_plains,qhackerrors=True', >>> p='default:use_k_padding=True,can_match_sameimg=False,can_match_samename=False') >>> impossible_daids_list, Kpad_list = build_impossible_daids_list(qreq_) >>> impossible_daids_list = [x.tolist() for x in impossible_daids_list] >>> vals = ut.dict_subset(locals(), ['impossible_daids_list', 'Kpad_list']) >>> result = ut.repr2(vals, nl=1, explicit=True, nobr=True, strvals=True) >>> print(result) >>> assert np.all(qreq_.qaids == [1, 4, 5, 6]) >>> assert np.all(qreq_.daids == [1, 2, 3, 4, 5, 6]) ... impossible_daids_list=[[1], [4], [5, 6], [5, 6]], Kpad_list=[1, 1, 2, 2],
- wbia.algo.hots.pipeline.cachemiss_nn_compute_fn(flags_list, qreq_, Kpad_list, impossible_daids_list, K, Knorm, requery, verbose)[source]
Logic for computing neighbors if there is a cache miss
>>> flags_list = [True] * len(Kpad_list) >>> flags_list = [True, False, True]
- wbia.algo.hots.pipeline.compute_matching_dlen_extent(qreq_, fm_list, kpts_list)[source]
helper for spatial verification, computes the squared diagonal length of matching chips
- CommandLine:
python -m wbia.algo.hots.pipeline --test-compute_matching_dlen_extent
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST') >>> verbose = True >>> cm = cm_list[0] >>> cm.set_cannonical_annot_score(cm.get_num_matches_list()) >>> cm.sortself() >>> fm_list = cm.fm_list >>> kpts_list = qreq_.get_qreq_dannot_kpts(cm.daid_list.tolist()) >>> topx2_dlen_sqrd = compute_matching_dlen_extent(qreq_, fm_list, kpts_list) >>> ut.assert_inbounds(np.sqrt(topx2_dlen_sqrd)[0:5], 600, 1500)
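A minimal sketch of the squared-diagonal computation for one annotation's matches (the helper name and keypoint layout are assumptions; real wbia keypoints carry more columns than just x, y):

```python
import numpy as np

def matching_dlen_sqrd(kpts, fm):
    # Bounding box of the matched keypoint centers; return the squared
    # diagonal length of that extent.
    dfx_list = [dfx for (qfx, dfx) in fm]
    xys = kpts[dfx_list, 0:2]                 # keypoint centers (x, y)
    extent = xys.max(axis=0) - xys.min(axis=0)
    return float((extent ** 2).sum())

kpts = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
fm = [(0, 0), (1, 1)]
print(matching_dlen_sqrd(kpts, fm))
```

Working with the squared length avoids a square root per annotation; callers can take `np.sqrt` at the end, as the doctest above does.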
- wbia.algo.hots.pipeline.get_sparse_matchinfo_nonagg(qreq_, nns, neighb_valid0, neighb_score_list, neighb_valid_list, neighb_normk_list, Knorm, fsv_col_lbls)[source]
builds sparse iterator that generates feature match pairs, scores, and ranks
- Returns
vmt - a tuple of corresponding lists. Each item in the
list corresponds to a daid, dfx, scorevec, rank, norm_aid, norm_fx…
- Return type
- CommandLine:
python -m wbia.algo.hots.pipeline --test-get_sparse_matchinfo_nonagg --show
python -m wbia.algo.hots.pipeline --test-get_sparse_matchinfo_nonagg:1 --show
utprof.py -m wbia.algo.hots.pipeline --test-get_sparse_matchinfo_nonagg
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> verbose = True >>> qreq_, qaid, daid, args = plh.testdata_sparse_matchinfo_nonagg( >>> defaultdb='PZ_MTEST', p=['default:Knorm=3,normalizer_rule=name,const_on=True,ratio_thresh=.2,sqrd_dist_on=True']) >>> nns, neighb_valid0, neighb_score_list, neighb_valid_list, neighb_normk_list, Knorm, fsv_col_lbls = args >>> cm = get_sparse_matchinfo_nonagg(qreq_, *args) >>> qannot = qreq_.ibs.annots([qaid], config=qreq_.qparams) >>> dannot = qreq_.ibs.annots(cm.daid_list, config=qreq_.qparams) >>> cm.assert_self(verbose=False) >>> ut.quit_if_noshow() >>> cm.score_annot_csum(qreq_) >>> cm.show_single_annotmatch(qreq_) >>> ut.show_if_requested()
- wbia.algo.hots.pipeline.nearest_neighbor_cacheid2(qreq_, Kpad_list)[source]
Returns a hacky cacheid for neighbor configs. DEPRECATE: This will be replaced by dtool caching
- Parameters
qreq (QueryRequest) – query request object with hyper-parameters
Kpad_list (list) –
- Returns
(nn_mid_cacheid_list, nn_cachedir)
- Return type
- CommandLine:
python -m wbia.algo.hots.pipeline --exec-nearest_neighbor_cacheid2
python -m wbia.algo.hots.pipeline --exec-nearest_neighbor_cacheid2 --superstrict
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> import wbia >>> verbose = True >>> cfgdict = dict(K=4, Knorm=1, checks=800, use_k_padding=False) >>> # test 1 >>> p = 'default' + ut.get_cfg_lbl(cfgdict) >>> qreq_ = wbia.testdata_qreq_( >>> defaultdb='testdb1', p=[p], qaid_override=[1, 2], >>> daid_override=[1, 2, 3, 4, 5]) >>> locals_ = plh.testrun_pipeline_upto(qreq_, 'nearest_neighbors') >>> Kpad_list, = ut.dict_take(locals_, ['Kpad_list']) >>> tup = nearest_neighbor_cacheid2(qreq_, Kpad_list) >>> (nn_cachedir, nn_mid_cacheid_list) = tup >>> result1 = 'nn_mid_cacheid_list1 = ' + ut.repr2(nn_mid_cacheid_list, nl=1) >>> # test 2 >>> cfgdict2 = dict(K=2, Knorm=3, use_k_padding=True) >>> p2 = 'default' + ut.get_cfg_lbl(cfgdict) >>> ibs = qreq_.ibs >>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', p=[p2], qaid_override=[1, 2], daid_override=[1, 2, 3, 4, 5]) >>> locals_ = plh.testrun_pipeline_upto(qreq_, 'nearest_neighbors') >>> Kpad_list, = ut.dict_take(locals_, ['Kpad_list']) >>> tup = nearest_neighbor_cacheid2(qreq_, Kpad_list) >>> (nn_cachedir, nn_mid_cacheid_list) = tup >>> result2 = 'nn_mid_cacheid_list2 = ' + ut.repr2(nn_mid_cacheid_list, nl=1) >>> result = result1 + '\n' + result2 >>> print(result) nn_mid_cacheid_list1 = [ 'nnobj_8687dcb6-1f1f-fdd3-8b72-8f36f9f41905_DVUUIDS((5)oavtblnlrtocnrpm)_NN(single,cks800)_Chip(sz700,maxwh)_Feat(hesaff+sift)_FLANN(8_kdtrees)_truek6', 'nnobj_a2aef668-20c1-1897-d8f3-09a47a73f26a_DVUUIDS((5)oavtblnlrtocnrpm)_NN(single,cks800)_Chip(sz700,maxwh)_Feat(hesaff+sift)_FLANN(8_kdtrees)_truek6', ] nn_mid_cacheid_list2 = [ 'nnobj_8687dcb6-1f1f-fdd3-8b72-8f36f9f41905_DVUUIDS((5)oavtblnlrtocnrpm)_NN(single,cks800)_Chip(sz700,maxwh)_Feat(hesaff+sift)_FLANN(8_kdtrees)_truek6', 'nnobj_a2aef668-20c1-1897-d8f3-09a47a73f26a_DVUUIDS((5)oavtblnlrtocnrpm)_NN(single,cks800)_Chip(sz700,maxwh)_Feat(hesaff+sift)_FLANN(8_kdtrees)_truek6', ]
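The shape of such a cache id can be sketched generically: hash the data identity and append the parameter pieces, so changing either data or config yields a new key. The function and its format below are assumptions for illustration, not the real cacheid layout:

```python
import hashlib

def make_nn_cacheid(annot_uuid, data_uuids, K, Knorm, checks):
    # Sorted join makes the id independent of database ordering.
    data_hash = hashlib.sha1(''.join(sorted(data_uuids)).encode()).hexdigest()[:8]
    return 'nnobj_{}_D{}_NN(cks{})_truek{}'.format(
        annot_uuid, data_hash, checks, K + Knorm)

cid = make_nn_cacheid('8687dcb6', ['u1', 'u2', 'u3'], K=4, Knorm=1, checks=800)
print(cid)
```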
- wbia.algo.hots.pipeline.nearest_neighbors(qreq_, Kpad_list, impossible_daids_list=None, verbose=False)[source]
Plain nearest neighbors. Tries to load nearest neighbors from a cache instead of recomputing them.
- CommandLine:
python -m wbia.algo.hots.pipeline --test-nearest_neighbors
python -m wbia.algo.hots.pipeline --test-nearest_neighbors --db PZ_MTEST --qaids=1:100
utprof.py -m wbia.algo.hots.pipeline --test-nearest_neighbors --db PZ_MTEST --qaids=1:100
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> import wbia >>> verbose = True >>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', qaid_override=[1]) >>> locals_ = plh.testrun_pipeline_upto(qreq_, 'nearest_neighbors') >>> Kpad_list, impossible_daids_list = ut.dict_take( >>> locals_, ['Kpad_list', 'impossible_daids_list']) >>> nns_list = nearest_neighbors(qreq_, Kpad_list, impossible_daids_list, >>> verbose=verbose) >>> qaid = qreq_.internal_qaids[0] >>> nn = nns_list[0] >>> (qfx2_idx, qfx2_dist) = nn >>> num_neighbors = Kpad_list[0] + qreq_.qparams.K + qreq_.qparams.Knorm >>> # Assert nns tuple is valid >>> ut.assert_eq(qfx2_idx.shape, qfx2_dist.shape) >>> ut.assert_eq(qfx2_idx.shape[1], num_neighbors) >>> ut.assert_inbounds(qfx2_idx.shape[0], 1000, 3000)
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> import wbia >>> verbose = True >>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', qaid_override=[1]) >>> locals_ = plh.testrun_pipeline_upto(qreq_, 'nearest_neighbors') >>> Kpad_list, impossible_daids_list = ut.dict_take( >>> locals_, ['Kpad_list', 'impossible_daids_list']) >>> nns_list = nearest_neighbors(qreq_, Kpad_list, impossible_daids_list, >>> verbose=verbose) >>> qaid = qreq_.internal_qaids[0] >>> nn = nns_list[0] >>> (qfx2_idx, qfx2_dist) = nn >>> num_neighbors = Kpad_list[0] + qreq_.qparams.K + qreq_.qparams.Knorm >>> # Assert nns tuple is valid >>> ut.assert_eq(qfx2_idx.shape, qfx2_dist.shape) >>> ut.assert_eq(qfx2_idx.shape[1], num_neighbors) >>> ut.assert_inbounds(qfx2_idx.shape[0], 1000, 3000)
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> import wbia >>> verbose = True >>> custom_nid_lookup = {a: a for a in range(14)} >>> qreq1_ = wbia.testdata_qreq_( >>> defaultdb='testdb1', t=['default:K=2,requery=True,can_match_samename=False'], >>> daid_override=[2, 3, 4, 5, 6, 7, 8], >>> qaid_override=[2, 5, 1], custom_nid_lookup=custom_nid_lookup) >>> locals_ = plh.testrun_pipeline_upto(qreq1_, 'nearest_neighbors') >>> Kpad_list, impossible_daids_list = ut.dict_take( >>> locals_, ['Kpad_list', 'impossible_daids_list']) >>> nns_list1 = nearest_neighbors(qreq1_, Kpad_list, impossible_daids_list, >>> verbose=verbose) >>> nn1 = nns_list1[0] >>> nnvalid0_list1 = baseline_neighbor_filter(qreq1_, nns_list1, >>> impossible_daids_list) >>> assert np.all(nnvalid0_list1[0]), ( >>> 'requery should never produce impossible results') >>> # Compare versus not using requery >>> qreq2_ = wbia.testdata_qreq_( >>> defaultdb='testdb1', t=['default:K=2,requery=False'], >>> daid_override=[1, 2, 3, 4, 5, 6, 7, 8], >>> qaid_override=[2, 5, 1]) >>> locals_ = plh.testrun_pipeline_upto(qreq2_, 'nearest_neighbors') >>> Kpad_list, impossible_daids_list = ut.dict_take( >>> locals_, ['Kpad_list', 'impossible_daids_list']) >>> nns_list2 = nearest_neighbors(qreq2_, Kpad_list, impossible_daids_list, >>> verbose=verbose) >>> nn2 = nns_list2[0] >>> nn1.neighb_dists >>> nn2.neighb_dists
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> import wbia >>> verbose = True >>> qreq1_ = wbia.testdata_qreq_( >>> defaultdb='testdb1', t=['default:K=5,requery=True,can_match_samename=False'], >>> daid_override=[2, 3, 4, 5, 6, 7, 8], >>> qaid_override=[2, 5, 1]) >>> locals_ = plh.testrun_pipeline_upto(qreq1_, 'nearest_neighbors') >>> Kpad_list, impossible_daids_list = ut.dict_take( >>> locals_, ['Kpad_list', 'impossible_daids_list']) >>> nns_list1 = nearest_neighbors(qreq1_, Kpad_list, impossible_daids_list, >>> verbose=verbose) >>> nn1 = nns_list1[0] >>> nnvalid0_list1 = baseline_neighbor_filter(qreq1_, nns_list1, >>> impossible_daids_list) >>> assert np.all(nnvalid0_list1[0]), 'should always be valid'
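The load-or-compute pattern behind this function (and `cachemiss_nn_compute_fn` above) can be sketched with an in-memory dict standing in for the on-disk cache; the names are illustrative only:

```python
def cached_nn(cacheid_list, cache, compute_fn):
    # Flag the misses, compute only those, write back, return in order.
    flags = [cid not in cache for cid in cacheid_list]
    missed = [cid for cid, miss in zip(cacheid_list, flags) if miss]
    for cid, result in zip(missed, compute_fn(missed)):
        cache[cid] = result
    return [cache[cid] for cid in cacheid_list]

cache = {'a': 'nns_a'}
calls = []

def compute_fn(missed):
    calls.append(list(missed))          # record which ids were recomputed
    return ['nns_' + cid for cid in missed]

out = cached_nn(['a', 'b'], cache, compute_fn)
print(out, calls)
```

Only the miss (`'b'`) reaches the compute function; the hit is served from the cache.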
- wbia.algo.hots.pipeline.request_wbia_query_L0(ibs, qreq_, verbose=False)[source]
Driver logic of query pipeline
Note
Make sure _pipeline_helpers.testrun_pipeline_upto reflects what happens in this function.
- Parameters
ibs (wbia.IBEISController) – IBEIS database object to be queried. technically this object already lives inside of qreq_.
qreq (wbia.QueryRequest) – hyper-parameters. use
ibs.new_query_request
to create one
- Returns
cm_list containing
wbia.ChipMatch
objects
- Return type
- CommandLine:
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --show
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:1 --show
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --db testdb1 --qaid 325
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --db testdb3 --qaid 325  # background match
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --db NNP_Master3 --qaid 12838
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --db PZ_MTEST -a timectrl:qindex=0:256
python -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --db PZ_Master1 -a timectrl:qindex=0:256
utprof.py -m wbia.algo.hots.pipeline --test-request_wbia_query_L0:0 --db PZ_Master1 -a timectrl:qindex=0:256
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> import wbia >>> qreq_ = wbia.init.main_helpers.testdata_qreq_(a=['default:qindex=0:2,dindex=0:10']) >>> ibs = qreq_.ibs >>> print(qreq_.qparams.query_cfgstr) >>> verbose = True >>> cm_list = request_wbia_query_L0(ibs, qreq_, verbose=verbose) >>> cm = cm_list[0] >>> ut.quit_if_noshow() >>> cm.ishow_analysis(qreq_, fnum=0, make_figtitle=True) >>> ut.show_if_requested()
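The driver's stage order can be outlined with stand-in callables (a hedged sketch only; the real function threads `qreq_` and intermediate results through each stage and may short-circuit via the chipmatch cache):

```python
def run_L0(qreq_, stages):
    # Thread the previous stage's result into the next stage.
    results = None
    for stage in stages:
        results = stage(qreq_, results)
    return results

log = []

def make_stage(name):
    def stage(qreq_, prev):
        log.append(name)   # record execution order
        return name
    return stage

stages = [make_stage(s) for s in (
    'nearest_neighbors', 'baseline_neighbor_filter',
    'weight_neighbors', 'build_chipmatches', 'spatial_verification')]
result = run_L0('qreq', stages)
print(result, log)
```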
- wbia.algo.hots.pipeline.spatial_verification(qreq_, cm_list_FILT, verbose=False)[source]
pipeline step 5 - spatially verify feature matches
- Returns
cm_list_SVER - new list of spatially verified chipmatches
- Return type
- CommandLine:
python -m wbia.algo.hots.pipeline --test-spatial_verification --show
python -m wbia.algo.hots.pipeline --test-spatial_verification --show --qaid 1
python -m wbia.algo.hots.pipeline --test-spatial_verification:0
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[18]) >>> scoring.score_chipmatch_list(qreq_, cm_list, qreq_.qparams.prescore_method) # HACK >>> cm = cm_list[0] >>> top_nids = cm.get_top_nids(6) >>> verbose = True >>> cm_list_SVER = spatial_verification(qreq_, cm_list) >>> # Test Results >>> cmSV = cm_list_SVER[0] >>> scoring.score_chipmatch_list(qreq_, cm_list_SVER, qreq_.qparams.score_method) # HACK >>> top_nids_SV = cmSV.get_top_nids(6) >>> cm.print_csv(sort=True) >>> cmSV.print_csv(sort=False) >>> gt_daids = np.intersect1d(cm.get_groundtruth_daids(), cmSV.get_groundtruth_daids()) >>> fm_list = cm.get_annot_fm(gt_daids) >>> fmSV_list = cmSV.get_annot_fm(gt_daids) >>> maplen = lambda list_: np.array(list(map(len, list_))) >>> assert len(gt_daids) > 0, 'ground truth did not survive' >>> ut.assert_lessthan(maplen(fmSV_list), maplen(fm_list)), 'feature matches were not filtered' >>> ut.quit_if_noshow() >>> cmSV.show_daids_matches(qreq_, gt_daids) >>> import wbia.plottool as pt >>> #homog_tup = (refined_inliers, H) >>> #aff_tup = (aff_inliers, Aff) >>> #pt.draw_sv.show_sv(rchip1, rchip2, kpts1, kpts2, fm, aff_tup=aff_tup, homog_tup=homog_tup, refine_method=refine_method) >>> ut.show_if_requested()
- wbia.algo.hots.pipeline.sver_single_chipmatch(qreq_, cm, verbose=False)[source]
Spatially verifies a shortlist of a single chipmatch
TODO: move to chip match?
loops over a shortlist of results for a specific query annotation
- Parameters
qreq (QueryRequest) – query request object with hyper-parameters
cm (ChipMatch) –
- Returns
cmSV
- Return type
wbia.ChipMatch
- CommandLine:
- python -m wbia draw_rank_cmc --db PZ_Master1 --show -t best:refine_method=[homog,affine,cv2-homog,cv2-ransac-homog,cv2-lmeds-homog] -a timectrlhard --acfginfo --veryverbtd
- python -m wbia draw_rank_cmc --db PZ_Master1 --show -t best:refine_method=[homog,cv2-lmeds-homog],full_homog_checks=[True,False] -a timectrlhard --acfginfo --veryverbtd
- python -m wbia sver_single_chipmatch --show -t default:full_homog_checks=True -a default --qaid 18
- python -m wbia sver_single_chipmatch --show -t default:refine_method=affine -a default --qaid 18
- python -m wbia sver_single_chipmatch --show -t default:refine_method=cv2-homog -a default --qaid 18
- python -m wbia sver_single_chipmatch --show -t default:refine_method=cv2-homog,full_homog_checks=True -a default --qaid 18
- python -m wbia sver_single_chipmatch --show -t default:refine_method=cv2-homog,full_homog_checks=False -a default --qaid 18
- python -m wbia sver_single_chipmatch --show -t default:refine_method=cv2-lmeds-homog,full_homog_checks=False -a default --qaid 18
- python -m wbia sver_single_chipmatch --show -t default:refine_method=cv2-ransac-homog,full_homog_checks=False -a default --qaid 18
- python -m wbia sver_single_chipmatch --show -t default:full_homog_checks=False -a default --qaid 18
python -m wbia sver_single_chipmatch --show --qaid=18 --y=0
python -m wbia sver_single_chipmatch --show --qaid=18 --y=1
Example
>>> # DISABLE_DOCTEST >>> # Visualization >>> from wbia.algo.hots.pipeline import * # NOQA >>> qreq_, args = plh.testdata_pre('spatial_verification', defaultdb='PZ_MTEST') #, qaid_list=[18]) >>> cm_list = args.cm_list_FILT >>> ibs = qreq_.ibs >>> cm = cm_list[0] >>> scoring.score_chipmatch_list(qreq_, cm_list, qreq_.qparams.prescore_method) # HACK >>> #locals_ = ut.exec_func_src(sver_single_chipmatch, key_list=['svtup_list'], sentinal='# <SENTINAL>') >>> #svtup_list1, = locals_ >>> verbose = True >>> source = ut.get_func_sourcecode(sver_single_chipmatch, stripdef=True, strip_docstr=True) >>> source = ut.replace_between_tags(source, '', '# <SENTINAL>', '# </SENTINAL>') >>> globals_ = globals().copy() >>> exec(source, globals_) >>> svtup_list = globals_['svtup_list'] >>> gt_daids = cm.get_groundtruth_daids() >>> x = ut.get_argval('--y', type_=int, default=0) >>> #print('x = %r' % (x,)) >>> #daid = daids[x % len(daids)] >>> notnone_list = ut.not_list(ut.flag_None_items(svtup_list)) >>> valid_idxs = np.where(notnone_list) >>> valid_daids = cm.daid_list[valid_idxs] >>> assert len(valid_daids) > 0, 'cannot spatially verify' >>> valid_gt_daids = np.intersect1d(gt_daids, valid_daids) >>> #assert len(valid_gt_daids) == 0, 'no sver groundtruth' >>> daid = valid_gt_daids[x] if len(valid_gt_daids) > 0 else valid_daids[x] >>> idx = cm.daid2_idx[daid] >>> svtup = svtup_list[idx] >>> assert svtup is not None, 'SV TUP IS NONE' >>> refined_inliers, refined_errors, H = svtup[0:3] >>> aff_inliers, aff_errors, Aff = svtup[3:6] >>> homog_tup = (refined_inliers, H) >>> aff_tup = (aff_inliers, Aff) >>> fm = cm.fm_list[idx] >>> aid1 = cm.qaid >>> aid2 = daid >>> rchip1, = ibs.get_annot_chips([aid1], config2_=qreq_.extern_query_config2) >>> kpts1, = ibs.get_annot_kpts([aid1], config2_=qreq_.extern_query_config2) >>> rchip2, = ibs.get_annot_chips([aid2], config2_=qreq_.extern_data_config2) >>> kpts2, = ibs.get_annot_kpts([aid2], config2_=qreq_.extern_data_config2) >>> import wbia.plottool as pt >>> import matplotlib as mpl >>> from wbia.scripts.thesis import TMP_RC >>> mpl.rcParams.update(TMP_RC) >>> show_aff = not ut.get_argflag('--noaff') >>> refine_method = qreq_.qparams.refine_method if not ut.get_argflag('--norefinelbl') else '' >>> pt.draw_sv.show_sv(rchip1, rchip2, kpts1, kpts2, fm, aff_tup=aff_tup, >>> homog_tup=homog_tup, show_aff=show_aff, >>> refine_method=refine_method) >>> ut.show_if_requested()
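The inlier-selection idea at the heart of spatial verification can be sketched with a deliberately simplified model. A plain translation estimate stands in here for the real affine/homography refinement; the function name and threshold are assumptions for illustration:

```python
import numpy as np

def sver_translation(kpts1, kpts2, fm, xy_thresh):
    # Fit a crude translation model to the matched keypoint centers,
    # then keep only the correspondences that agree with it.
    pts1 = kpts1[[qfx for qfx, _ in fm]]
    pts2 = kpts2[[dfx for _, dfx in fm]]
    shift = np.median(pts2 - pts1, axis=0)           # robust model fit
    err = np.linalg.norm((pts1 + shift) - pts2, axis=1)
    inliers = [m for m, e in zip(fm, err) if e <= xy_thresh]
    return inliers, shift

kpts1 = np.array([[0., 0.], [1., 0.], [0., 1.]])
kpts2 = np.array([[5., 5.], [6., 5.], [9., 9.]])
fm = [(0, 0), (1, 1), (2, 2)]
inliers, shift = sver_translation(kpts1, kpts2, fm, xy_thresh=1.0)
print(inliers)
```

The third match disagrees with the dominant shift and is rejected, mirroring how geometrically inconsistent feature matches are filtered from each shortlist candidate.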
- wbia.algo.hots.pipeline.weight_neighbors(qreq_, nns_list, nnvalid0_list, verbose=False)[source]
pipeline step 3 - assigns weights to feature matches based on the active filter list
- CommandLine:
python -m wbia.algo.hots.pipeline --test-weight_neighbors
python -m wbia.algo.hots.pipeline --test-weight_neighbors:0 --verbose --verbtd --ainfo --nocache --veryverbose
python -m wbia.algo.hots.pipeline --test-weight_neighbors:0 --show
python -m wbia.algo.hots.pipeline --test-weight_neighbors:1 --show
python -m wbia.algo.hots.pipeline --test-weight_neighbors:0 --show -t default:lnbnn_normer=lnbnn_fg_0.9__featscore,lnbnn_norm_thresh=.9
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> qreq_, args = plh.testdata_pre( >>> 'weight_neighbors', defaultdb='testdb1', >>> a=['default:qindex=0:3,dindex=0:5,hackerrors=False'], >>> p=['default:codename=vsmany,bar_l2_on=True,fg_on=False'], verbose=True) >>> nns_list, nnvalid0_list = args >>> verbose = True >>> weight_ret = weight_neighbors(qreq_, nns_list, nnvalid0_list, verbose) >>> filtkey_list, filtweights_list, filtvalids_list, filtnormks_list = weight_ret >>> import wbia.plottool as pt >>> verbose = True >>> cm_list = build_chipmatches( >>> qreq_, nns_list, nnvalid0_list, filtkey_list, filtweights_list, >>> filtvalids_list, filtnormks_list, verbose=verbose) >>> ut.quit_if_noshow() >>> cm = cm_list[0] >>> cm.score_name_nsum(qreq_) >>> cm.ishow_analysis(qreq_) >>> ut.show_if_requested()
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.pipeline import * # NOQA >>> qreq_, args = plh.testdata_pre( >>> 'weight_neighbors', defaultdb='testdb1', >>> a=['default:qindex=0:3,dindex=0:5,hackerrors=False'], >>> p=['default:codename=vsmany,bar_l2_on=True,fg_on=False'], verbose=True) >>> nns_list, nnvalid0_list = args >>> verbose = True >>> weight_ret = weight_neighbors(qreq_, nns_list, nnvalid0_list, verbose) >>> filtkey_list, filtweights_list, filtvalids_list, filtnormks_list = weight_ret >>> nInternAids = len(qreq_.get_internal_qaids()) >>> nFiltKeys = len(filtkey_list) >>> filtweight_depth = ut.depth_profile(filtweights_list) >>> filtvalid_depth = ut.depth_profile(filtvalids_list) >>> ut.assert_eq(nInternAids, len(filtweights_list)) >>> ut.assert_eq(nInternAids, len(filtvalids_list)) >>> ut.assert_eq(ut.get_list_column(filtweight_depth, 0), [nFiltKeys] * nInternAids) >>> ut.assert_eq(filtvalid_depth, (nInternAids, nFiltKeys)) >>> ut.assert_eq(filtvalids_list, [[None, None], [None, None], [None, None]]) >>> ut.assert_eq(filtkey_list, [hstypes.FiltKeys.LNBNN, hstypes.FiltKeys.BARL2]) >>> ut.quit_if_noshow() >>> import wbia.plottool as pt >>> verbose = True >>> cm_list = build_chipmatches( >>> qreq_, nns_list, nnvalid0_list, filtkey_list, filtweights_list, >>> filtvalids_list, filtnormks_list, verbose=verbose) >>> cm = cm_list[0] >>> cm.score_name_nsum(qreq_) >>> cm.ishow_analysis(qreq_) >>> ut.show_if_requested()
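One of the filters this step can apply is LNBNN weighting, whose core idea can be sketched as follows (a hedged simplification: the choice of the last neighbor as the normalizer, and the function name, are assumptions for illustration):

```python
import numpy as np

def lnbnn_weights(qfx2_dist, K, Knorm):
    # Score each of the first K matches by how far the normalizer
    # neighbor is beyond it: a larger gap means a more distinctive match.
    match_dist = qfx2_dist[:, 0:K]
    norm_dist = qfx2_dist[:, K + Knorm - 1:K + Knorm]  # normalizer column
    return norm_dist - match_dist

qfx2_dist = np.array([[0.1, 0.2, 0.9],
                      [0.3, 0.4, 0.5]])
w = lnbnn_weights(qfx2_dist, K=2, Knorm=1)
print(w)
```

The first query feature gets high weights (its normalizer is far away), the second gets low weights (its neighbors are all about equally close), which is exactly the discriminativeness signal the filter is after.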
wbia.algo.hots.query_params module
- class wbia.algo.hots.query_params.QueryParams(query_cfg=None, cfgdict=None)[source]
Bases:
collections.abc.Mapping
wbia.algo.hots.query_request module
Todo
replace with dtool Rename to IdentifyRequest
python -m utool.util_inspect check_module_usage --pat="query_request.py"
- class wbia.algo.hots.query_request.QueryRequest[source]
Bases:
utool.util_dev.NiceRepr
Request object for a pipeline parameter run
- property daids
These are the user's daids in vsone mode
- property dannots
external data annotation objects
- ensure_chips(verbose=True, num_retries=1)[source]
ensure chips are computed (used in expt, not used in pipeline)
- CommandLine:
python -m wbia.algo.hots.query_request --test-ensure_chips
Example
>>> # ENABLE_DOCTEST >>> # Delete chips (accidentally), then try to run a query >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> ibs = wbia.opendb(defaultdb='testdb1') >>> daids = ibs.get_valid_aids()[0:3] >>> qaids = ibs.get_valid_aids()[0:6] >>> qreq_ = ibs.new_query_request(qaids, daids) >>> verbose = True >>> num_retries = 1 >>> qchip_fpaths = ibs.get_annot_chip_fpath(qaids, config2_=qreq_.extern_query_config2) >>> dchip_fpaths = ibs.get_annot_chip_fpath(daids, config2_=qreq_.extern_data_config2) >>> ut.remove_file_list(qchip_fpaths) >>> ut.remove_file_list(dchip_fpaths) >>> result = qreq_.ensure_chips(verbose, num_retries) >>> print(result)
- ensure_features(verbose=True, prog_hook=None)[source]
ensure features are computed
- Parameters
verbose (bool) – verbosity flag (default = True)
- CommandLine:
python -m wbia.algo.hots.query_request --test-ensure_features
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> ibs = wbia.opendb(defaultdb='testdb1') >>> daids = ibs.get_valid_aids()[0:2] >>> qaids = ibs.get_valid_aids()[0:3] >>> qreq_ = ibs.new_query_request(qaids, daids) >>> ibs.delete_annot_feats(qaids, config2_=qreq_.extern_query_config2) # Remove the chips >>> ut.remove_file_list(ibs.get_annot_chip_fpath(qaids, config2_=qreq_.extern_query_config2)) >>> verbose = True >>> result = qreq_.ensure_features(verbose) >>> print(result)
- execute(qaids=None, prog_hook=None, use_cache=None, use_supercache=None, invalidate_supercache=None)[source]
Runs the hotspotter pipeline and returns chip match objects.
- CommandLine:
python -m wbia.algo.hots.query_request execute --show
Example
>>> # SLOW_DOCTEST >>> # xdoctest: +SKIP >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> qreq_ = wbia.testdata_qreq_() >>> cm_list = qreq_.execute() >>> ut.quit_if_noshow() >>> cm = cm_list[0] >>> cm.ishow_analysis(qreq_) >>> ut.show_if_requested()
- property extern_data_config2
- property extern_query_config2
- get_cfgstr(with_input=False, with_data=True, with_pipe=True, hash_pipe=False)[source]
main cfgstring used to identify the 'querytype'. FIXME: name params + data
Todo
rename query_cfgstr to pipe_cfgstr or pipeline_cfgstr EVERYWHERE
- CommandLine:
python -m wbia.algo.hots.query_request --exec-get_cfgstr
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', >>> p='default:fgw_thresh=.3', >>> a='default:species=zebra_plains') >>> with_input = True >>> cfgstr = qreq_.get_cfgstr(with_input) >>> result = ('cfgstr = %s' % (str(cfgstr),)) >>> print(result)
- get_chipmatch_fpaths(qaid_list, super_qres_cache=False)[source]
Generates chipmatch paths for input query annotation rowids
- get_data_hashid()[source]
- CommandLine:
python -m wbia.algo.hots.query_request --exec-QueryRequest.get_query_hashid --show
Example
>>> # DISABLE_DOCTEST >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> qreq_ = wbia.testdata_qreq_() >>> data_hashid = qreq_.get_data_hashid() >>> result = ('data_hashid = %s' % (ut.repr2(data_hashid),)) >>> print(result)
- get_full_cfgstr()[source]
main cfgstring used to identify the ‘querytype’ FIXME: name params + data + query
- get_qreq_pcc_hashid(aids, prefix='', with_nids=False)[source]
Gets a combined hash of a group of aids. Each aid hash represents itself in the context of the query database.
only considers grouping of database names
- CommandLine:
python -m wbia.algo.hots.query_request --test-get_qreq_pcc_hashid:0
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> p = ['default:K=2,nameknn=True'] >>> defaultdb = 'testdb1' >>> # Test that UUIDS change when you change the name lookup >>> new_ = ut.partial(wbia.testdata_qreq_, defaultdb=defaultdb, p=p, >>> verbose=False) >>> # All diff names >>> qreq1 = new_(daid_override=[2, 3, 5, 6], >>> qaid_override=[1, 2, 4], >>> custom_nid_lookup={a: a for a in range(14)}) >>> # All same names >>> qreq2 = new_(daid_override=[2, 3, 5, 6], >>> qaid_override=[1, 2, 4], >>> custom_nid_lookup={a: 1 for a in range(14)}) >>> # Change the PCC, removing a query (data should NOT change) >>> # because the thing being queried against is the same >>> qreq3 = new_(daid_override=[2, 3, 5, 6], >>> qaid_override=[1, 2], >>> custom_nid_lookup={a: 1 for a in range(14)}) >>> # Now remove a database object (query SHOULD change) >>> # because the results are different depending on >>> # nameing of database (maybe they shouldnt change...) >>> qreq4 = new_(daid_override=[2, 3, 6], >>> qaid_override=[1, 2, 4], >>> custom_nid_lookup={a: 1 for a in range(14)}) >>> print(qreq1.get_cfgstr(with_input=True, with_pipe=False)) >>> print(qreq2.get_cfgstr(with_input=True, with_pipe=False)) >>> print(qreq3.get_cfgstr(with_input=True, with_pipe=False)) >>> print(qreq4.get_cfgstr(with_input=True, with_pipe=False)) >>> assert qreq3.get_data_hashid() == qreq2.get_data_hashid() >>> assert qreq1.get_data_hashid() != qreq2.get_data_hashid()
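The property the doctest checks, that the hash depends on the name labels and not just the aids, can be sketched directly. The function below is an illustrative assumption, not the real hashid implementation:

```python
import hashlib

def pcc_hashid(aids, nid_lookup, prefix=''):
    # Hash (aid, nid) pairs so that renaming annotations changes the
    # hash even when the aid set is identical. Sorting makes the hash
    # independent of input order.
    parts = ['{}:{}'.format(aid, nid_lookup[aid]) for aid in sorted(aids)]
    digest = hashlib.sha1('|'.join(parts).encode()).hexdigest()[:12]
    return prefix + digest

h1 = pcc_hashid([2, 3, 5], {a: a for a in range(6)}, prefix='D')  # all different names
h2 = pcc_hashid([2, 3, 5], {a: 1 for a in range(6)}, prefix='D')  # all same name
print(h1 != h2)
```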
- get_query_hashid()[source]
- CommandLine:
python -m wbia.algo.hots.query_request --exec-QueryRequest.get_query_hashid --show
Example
>>> # DISABLE_DOCTEST >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> qreq_ = wbia.testdata_qreq_() >>> query_hashid = qreq_.get_query_hashid() >>> result = ('query_hashid = %s' % (ut.repr2(query_hashid),)) >>> print(result)
- property internal_dannots
- property internal_qannots
- lazy_preload(prog_hook=None, verbose=True)[source]
feature weights and normalizers should be loaded before vsone queries are issued. They do not depend only on qparams
Load non-query specific normalizers / weights
- classmethod new_query_request(qaid_list, daid_list, qparams, qresdir, ibs, query_config2_, data_config2_, _indexer_request_params, custom_nid_lookup=None)[source]
old way of calling new
- Parameters
qaid_list (list) –
daid_list (list) –
qparams (QueryParams) – query hyper-parameters
qresdir (str) –
ibs (wbia.IBEISController) – image analysis api
_indexer_request_params (dict) –
- Returns
wbia.QueryRequest
- property qaids
These are the user's qaids in vsone mode
- property qannots
internal query annotation objects
- rrr(verbose=True, reload_module=True)
special class reloading function. This function is often injected as rrr of classes
- set_external_qaid_mask(masked_qaid_list)[source]
- Parameters
qaid_list (list) –
- CommandLine:
python -m wbia.algo.hots.query_request --test-set_external_qaid_mask
Example
>>> # ENABLE_DOCTEST >>> from wbia.algo.hots.query_request import * # NOQA >>> import wbia >>> ibs = wbia.opendb(db='testdb1') >>> qaid_list = [1, 2, 3, 4, 5] >>> daid_list = [1, 2, 3, 4, 5] >>> qreq_ = ibs.new_query_request(qaid_list, daid_list) >>> masked_qaid_list = [2, 4, 5] >>> qreq_.set_external_qaid_mask(masked_qaid_list) >>> result = np.array_str(qreq_.qaids) >>> print(result) [1 3]
- set_internal_masked_daids(masked_daid_list)[source]
used by the pipeline to execute a subset of the query request without modifying important state
- set_internal_masked_qaids(masked_qaid_list)[source]
used by the pipeline to execute a subset of the query request without modifying important state
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.query_request import * # NOQA
>>> import utool as ut
>>> import wbia
>>> qaid_list = [1, 2, 3, 4]
>>> daid_list = [1, 2, 3, 4]
>>> qreq_ = wbia.testdata_qreq_(qaid_override=qaid_list, daid_override=daid_list, p='default:sv_on=True')
>>> qaids = qreq_.get_internal_qaids()
>>> ut.assert_lists_eq(qaid_list, qaids)
>>> masked_qaid_list = [1, 2, 3]
>>> qreq_.set_internal_masked_qaids(masked_qaid_list)
>>> new_internal_aids = qreq_.get_internal_qaids()
>>> ut.assert_lists_eq(new_internal_aids, [4])
- shallowcopy(qaids=None)[source]
Creates a copy of qreq with the same qparams object and a subset of the qx and dx objects. Used to generate chunks of vsmany queries.
- CommandLine:
python -m wbia.algo.hots.query_request QueryRequest.shallowcopy
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.query_request import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(default_qaids=[1, 2])
>>> qreq2_ = qreq_.shallowcopy(qaids=1)
>>> assert qreq_.daids is qreq2_.daids, 'should be the same'
>>> assert len(qreq_.qaids) != len(qreq2_.qaids), 'should be diff'
>>> #assert qreq_.metadata is not qreq2_.metadata
- wbia.algo.hots.query_request.apply_species_with_detector_hack(ibs, cfgdict, qaids, daids, verbose=None)[source]
HACK: turns off featweights if they cannot be applied
- wbia.algo.hots.query_request.cfg_deepcopy_test()[source]
TESTING FUNCTION
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.query_request import * # NOQA
>>> result = cfg_deepcopy_test()
>>> print(result)
- wbia.algo.hots.query_request.new_wbia_query_request(ibs, qaid_list, daid_list, cfgdict=None, verbose=None, unique_species=None, use_memcache=True, query_cfg=None, custom_nid_lookup=None)[source]
wbia entry point to create a new query request object
- Parameters
ibs (wbia.IBEISController) – image analysis api
qaid_list (list) – query ids
daid_list (list) – database ids
cfgdict (dict) – pipeline dictionary config
query_cfg (dtool.Config) – Pipeline Config Object
unique_species (None) – (default = None)
use_memcache (bool) – (default = True)
verbose (bool) – verbosity flag (default = True)
- Returns
wbia.QueryRequest
- CommandLine:
python -m wbia.algo.hots.query_request --test-new_wbia_query_request:0
python -m wbia.algo.hots.query_request --test-new_wbia_query_request:1
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.query_request import * # NOQA
>>> ibs, qaid_list, daid_list = testdata_newqreq('PZ_MTEST')
>>> unique_species = None
>>> verbose = ut.NOT_QUIET
>>> cfgdict = {'sv_on': False, 'fg_on': True}  # 'fw_detector': 'rf'}
>>> qreq_ = new_wbia_query_request(ibs, qaid_list, daid_list, cfgdict=cfgdict)
>>> print(qreq_.get_cfgstr())
>>> assert qreq_.qparams.sv_on is False, (
...     'qreq_.qparams.sv_on = %r ' % qreq_.qparams.sv_on)
>>> result = ibs.get_dbname() + qreq_.get_data_hashid()
>>> print(result)
PZ_MTEST_DPCC_UUIDS-a5-n2-vpkyggtpzbqbecuq
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.query_request import * # NOQA
>>> ibs, qaid_list, daid_list = testdata_newqreq('NAUT_test')
>>> unique_species = None
>>> verbose = ut.NOT_QUIET
>>> cfgdict = {'sv_on': True, 'fg_on': True}
>>> qreq_ = new_wbia_query_request(ibs, qaid_list, daid_list, cfgdict=cfgdict)
>>> assert qreq_.query_config2_.featweight_enabled is False
>>> # Featweight should be off because there is no Naut detector
>>> print(qreq_.qparams.query_cfgstr)
>>> assert qreq_.qparams.sv_on is True, (
...     'qreq_.qparams.sv_on = %r ' % qreq_.qparams.sv_on)
>>> result = ibs.get_dbname() + qreq_.get_data_hashid()
>>> print(result)
NAUT_test_DPCC_UUIDS-a5-n3-rtuyggvzpczvmjcw
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.query_request import * # NOQA
>>> ibs, qaid_list, daid_list = testdata_newqreq('PZ_MTEST')
>>> unique_species = None
>>> verbose = ut.NOT_QUIET
>>> cfgdict = {'sv_on': False, 'query_rotation_heuristic': True}
>>> qreq_ = new_wbia_query_request(ibs, qaid_list, daid_list, cfgdict=cfgdict)
>>> print(qreq_.qparams.query_cfgstr)
>>> assert qreq_.qparams.sv_on is False, (
...     'qreq_.qparams.sv_on = %r ' % qreq_.qparams.sv_on)
>>> result = ibs.get_dbname() + qreq_.get_data_hashid()
>>> print(result)
PZ_MTEST_DPCC_UUIDS-a5-n2-vpkyggtpzbqbecuq
- Ignore:
# This is supposed to be the beginnings of the code to transition the
# pipeline configuration into the new minimal dict based structure that
# supports different configs for query and database annotations.
dcfg = qreq_.extern_data_config2
qcfg = qreq_.extern_query_config2
ut.dict_intersection(qcfg.__dict__, dcfg.__dict__)
from wbia.expt import cfghelpers
cfg_list = [qcfg.__dict__, dcfg.__dict__]
nonvaried_cfg, varied_cfg_list = ut.partition_varied_cfg_list(
    cfg_list, recursive=True)
qvaried, dvaried = varied_cfg_list
wbia.algo.hots.requery_knn module
- class wbia.algo.hots.requery_knn.TempQuery(vecs, invalid_axs, get_neighbors, get_axs)[source]
Bases:
utool.util_dev.NiceRepr
queries that are incomplete
- class wbia.algo.hots.requery_knn.TempResults(index, idxs, dists, validflags)[source]
Bases:
utool.util_dev.NiceRepr
- wbia.algo.hots.requery_knn.requery_knn(get_neighbors, get_axs, qfx2_vec, num_neighbs, invalid_axs=[], pad=2, limit=4, recover=True)[source]
Searches for num_neighbs while ignoring certain matches. K is increased until enough valid neighbors are found or a limit is reached.
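The expanding-K idea can be sketched as follows. This is a minimal, hypothetical simplification, not the library's implementation: `requery_loop_sketch` is an invented name, and the real function also handles recovery for rows that never accumulate enough valid neighbors.

```python
import numpy as np

def requery_loop_sketch(get_neighbors, get_axs, vecs, K, invalid_axs,
                        pad=2, limit=4):
    # Ask for K + pad neighbors, mask out invalid ones, and double the
    # request size until every row has K valid neighbors or `limit`
    # attempts have been made.
    temp_K = K + pad
    for _ in range(limit):
        idxs, dists = get_neighbors(vecs, temp_K)
        validflags = ~np.isin(get_axs(idxs), invalid_axs)
        if np.all(validflags.sum(axis=1) >= K):
            break
        temp_K *= 2
    # Gather the first K valid neighbors per row (this sketch assumes
    # every row eventually found at least K valid neighbors; the real
    # function has a `recover` mode for rows that did not).
    out_idx = np.empty((len(vecs), K), dtype=idxs.dtype)
    out_dist = np.empty((len(vecs), K), dtype=dists.dtype)
    for i, (row_idx, row_dist, row_ok) in enumerate(
            zip(idxs, dists, validflags)):
        keep = np.flatnonzero(row_ok)[:K]
        out_idx[i] = row_idx[keep]
        out_dist[i] = row_dist[keep]
    return out_idx, out_dist
```

Because the neighbor backend and axis lookup are passed in as callables (as in `requery_knn` itself), the loop can be exercised with simulated data, mirroring the Ignore block below.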
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> import wbia
>>> qreq_ = wbia.testdata_qreq_(defaultdb='testdb1', a='default')
>>> qreq_.load_indexer()
>>> indexer = qreq_.indexer
>>> qannot = qreq_.internal_qannots[1]
>>> qfx2_vec = qannot.vecs
>>> ibs = qreq_.ibs
>>> qaid = qannot.aid
>>> impossible_aids = ibs.get_annot_groundtruth(qaid, noself=False)
>>> invalid_axs = np.array(ut.take(indexer.aid2_ax, impossible_aids))
>>> pad = 0
>>> limit = 1
>>> num_neighbs = 3
>>> def get_neighbors(vecs, temp_K):
>>>     return indexer.flann.nn_index(vecs, temp_K, checks=indexer.checks,
>>>                                   cores=indexer.cores)
>>> get_axs = indexer.get_nn_axs
>>> res = requery_knn(
>>>     get_neighbors, get_axs, qfx2_vec, num_neighbs, invalid_axs, pad,
>>>     limit, recover=True)
>>> qfx2_idx, qfx2_dist = res
>>> assert np.all(np.diff(qfx2_dist, axis=1) >= 0)
- Ignore:
>>> from wbia.algo.hots.neighbor_index import * # NOQA
>>> from wbia.algo.hots.requery_knn import * # NOQA
>>> max_k = 9
>>> n_pts = 5
>>> num_neighbs = 3
>>> temp_K = num_neighbs * 2
>>> #
>>> # Create dummy data
>>> rng = np.random.RandomState(0)
>>> tx2_idx_full = rng.randint(0, 10, size=(n_pts, max_k))
>>> tx2_idx_full[:, 0] = 0
>>> tx2_dist_full = np.meshgrid(np.arange(max_k), np.arange(n_pts))[0] / 10
>>> tx2_dist_full += (rng.rand(n_pts, max_k) * 10).astype(int) / 100
>>> qfx2_vec = np.arange(n_pts)[:, None]
>>> vecs = qfx2_vec
>>> #
>>> pad = 0
>>> limit = 1
>>> recover = True
>>> #
>>> invalid_axs = np.array([0, 1, 2, 5, 7, 9])
>>> get_axs = ut.identity
>>> #
>>> def get_neighbors(vecs, temp_K):
>>>     # simulates finding k nearest neighbors
>>>     idxs = tx2_idx_full[vecs.ravel(), 0:temp_K]
>>>     dists = tx2_dist_full[vecs.ravel(), 0:temp_K]
>>>     return idxs, dists
>>> #
>>> res = requery_knn(
>>>     get_neighbors, get_axs, qfx2_vec, num_neighbs, invalid_axs, pad,
>>>     limit, recover=True)
>>> qfx2_idx, qfx2_dist = res
wbia.algo.hots.scoring module
- wbia.algo.hots.scoring.get_name_shortlist_aids(daid_list, dnid_list, annot_score_list, name_score_list, nid2_nidx, nNameShortList, nAnnotPerName)[source]
- CommandLine:
python -m wbia.algo.hots.scoring --test-get_name_shortlist_aids
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.scoring import * # NOQA
>>> daid_list = np.array([11, 12, 13, 14, 15, 16, 17])
>>> dnid_list = np.array([21, 21, 21, 22, 22, 23, 24])
>>> annot_score_list = np.array([ 6, 2, 3, 5, 6, 3, 2])
>>> name_score_list = np.array([ 8, 9, 5, 4])
>>> nid2_nidx = {21: 0, 22: 1, 23: 2, 24: 3}
>>> nNameShortList, nAnnotPerName = 3, 2
>>> args = (daid_list, dnid_list, annot_score_list, name_score_list,
...         nid2_nidx, nNameShortList, nAnnotPerName)
>>> top_daids = get_name_shortlist_aids(*args)
>>> result = str(top_daids)
>>> print(result)
[15, 14, 11, 13, 16]
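The selection rule can be sketched as follows: take the top-scoring names, then the top-scoring annotations within each of those names. This is a hypothetical re-creation (`name_shortlist_sketch` is an invented name, and tie-breaking may differ from the real function), but it reproduces the example output above.

```python
import numpy as np

def name_shortlist_sketch(daid_list, dnid_list, annot_score_list,
                          name_score_list, nid2_nidx,
                          nNameShortList, nAnnotPerName):
    # Invert the nid -> name-index mapping so we can go from the
    # name-score ordering back to name ids.
    nidx2_nid = {nidx: nid for nid, nidx in nid2_nidx.items()}
    # Names sorted by descending name score, truncated to the shortlist.
    top_nidxs = np.argsort(name_score_list)[::-1][:nNameShortList]
    top_daids = []
    for nidx in top_nidxs:
        nid = nidx2_nid[nidx]
        # Annotations belonging to this name, best scores first.
        axs = np.flatnonzero(dnid_list == nid)
        best = axs[np.argsort(annot_score_list[axs])[::-1]][:nAnnotPerName]
        top_daids.extend(daid_list[best].tolist())
    return top_daids
```

Feeding in the example data from the doctest yields `[15, 14, 11, 13, 16]`: name 22 wins (score 9) and contributes its two annotations, then name 21 (score 8) contributes its top two, then name 23 (score 5) contributes its only annotation.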
- wbia.algo.hots.scoring.make_chipmatch_shortlists(qreq_, cm_list, nNameShortList, nAnnotPerName, score_method='nsum')[source]
Makes shortlists for reranking
- CommandLine:
python -m wbia.algo.hots.scoring --test-make_chipmatch_shortlists --show
Example
>>> # ENABLE_DOCTEST
>>> from wbia.algo.hots.scoring import * # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver('PZ_MTEST', qaid_list=[18])
>>> score_method = 'nsum'
>>> nNameShortList = 5
>>> nAnnotPerName = 6
>>> # apply scores
>>> score_chipmatch_list(qreq_, cm_list, score_method)
>>> cm_input = cm_list[0]
>>> #assert cm_input.dnid_list.take(cm_input.argsort())[0] == cm_input.qnid
>>> cm_shortlist = make_chipmatch_shortlists(qreq_, cm_list, nNameShortList, nAnnotPerName)
>>> cm_input.print_rawinfostr()
>>> cm = cm_shortlist[0]
>>> cm.print_rawinfostr()
>>> # should be sorted already from the shortlist take
>>> top_nid_list = cm.dnid_list
>>> top_aid_list = cm.daid_list
>>> qnid = cm.qnid
>>> print('top_aid_list = %r' % (top_aid_list,))
>>> print('top_nid_list = %r' % (top_nid_list,))
>>> print('qnid = %r' % (qnid,))
>>> rankx = top_nid_list.tolist().index(qnid)
>>> assert rankx == 0, 'qnid=%r should be first rank, not rankx=%r' % (qnid, rankx)
>>> max_num_rerank = nNameShortList * nAnnotPerName
>>> min_num_rerank = nNameShortList
>>> ut.assert_inbounds(len(top_nid_list), min_num_rerank, max_num_rerank,
...                    'incorrect number in shortlist', eq=True)
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_, daid=top_aid_list[0])
>>> ut.show_if_requested()
- wbia.algo.hots.scoring.score_chipmatch_list(qreq_, cm_list, score_method, progkw=None)[source]
- CommandLine:
python -m wbia.algo.hots.scoring --test-score_chipmatch_list
python -m wbia.algo.hots.scoring --test-score_chipmatch_list:1
python -m wbia.algo.hots.scoring --test-score_chipmatch_list:0 --show
Example
>>> # SLOW_DOCTEST
>>> # xdoctest: +SKIP
>>> # (IMPORTANT)
>>> from wbia.algo.hots.scoring import * # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_pre_sver()
>>> score_method = qreq_.qparams.prescore_method
>>> score_chipmatch_list(qreq_, cm_list, score_method)
>>> cm = cm_list[0]
>>> assert cm.score_list.argmax() == 0
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_)
>>> ut.show_if_requested()
Example
>>> # SLOW_DOCTEST
>>> # (IMPORTANT)
>>> from wbia.algo.hots.scoring import * # NOQA
>>> ibs, qreq_, cm_list = plh.testdata_post_sver()
>>> qaid = qreq_.qaids[0]
>>> cm = cm_list[0]
>>> score_method = qreq_.qparams.score_method
>>> score_chipmatch_list(qreq_, cm_list, score_method)
>>> assert cm.score_list.argmax() == 0
>>> ut.quit_if_noshow()
>>> cm.show_single_annotmatch(qreq_)
>>> ut.show_if_requested()
wbia.algo.hots.toy_nan_rf module
- wbia.algo.hots.toy_nan_rf.main()[source]
- SeeAlso:
python -m sklearn.ensemble.tests.test_forest test_multioutput
- CommandLine:
python -m wbia toy_classify_nans
python -m wbia toy_classify_nans --toy1 --save "rf_nan_toy1.jpg" --figsize=10,10
python -m wbia toy_classify_nans --toy2 --save "rf_nan_toy2.jpg" --figsize=10,10
python -m wbia toy_classify_nans --toy2 --save "rf_nan_toy3.jpg" --figsize=10,10 --extra
python -m wbia toy_classify_nans --toy2 --save "rf_nan_toy4.jpg" --figsize=10,10 --extra --nanrate=0
python -m wbia toy_classify_nans --toy2 --save "rf_nan_toy5.jpg" --figsize=10,10 --nanrate=0
Example
>>> # DISABLE_DOCTEST
>>> result = toy_classify_nans()
- wbia.algo.hots.toy_nan_rf.toydata1(rng)[source]
Description of Plot
You’ll notice that there are 4 plots. This is necessary to visualize a grid with nans. Each plot shows points in the 2-dimensional grid with corners at (0, 0) and (40, 40). The top left plot has these coordinates labeled. The other 3 plots correspond to the top left grid, but in these plots at least one of the dimensions has been “nanned”. In the top right the x-dimension is “nanned”. In the bottom left the y-dimension is “nanned”, and in the bottom right both dimensions are “nanned”. Even though all plots are drawn as a 2d-surface only the topleft plot is truly a surface with 2 degrees of freedom. The top right and bottom left plots are really lines with 1 degree of freedom, and the bottom right plot is actually just a single point with 0 degrees of freedom.
In this example I create 10 Gaussian blobs where the first 9 have their means laid out in a 3x3 grid and the last one has its mean in the center, but I gave it a high standard deviation. I’ll refer to the high std cluster as 9, and label the other clusters at the grid means (to agree with the demo code) like this:
6 7 8
3 4 5
0 1 2
Looking at the top left plot you can see clusters 0, 1, 2, 4, 6, and 8. The reason the other clusters do not appear in this grid is because I’ve set at least one of their dimensions to be nan. Specifically, cluster 3 had its y dimension set to nan; clusters 5 and 7 had their x dimension set to nan; and cluster 9 had both x and y dimensions set to nan.
For clusters 3, 5, and 7, I plot “nanned” points as lines along the nanned dimension to show that only the non-nan dimensions can be used to distinguish these points. I also plot the original position before I “nanned” it for visualization purposes, but the learning algorithm never sees this. For cluster 9, I only plot the original positions because all of this data collapses to a single point [nan, nan].
Red points are of class 0, and blue points are of class 1. Points in each plot represent the training data. The colored background of each plot represents the classification surface.
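The data construction described above can be sketched like this. It is a minimal, hypothetical reconstruction: toydata1's exact cluster sizes, spreads, and class labels may differ from the counts assumed here.

```python
import numpy as np

rng = np.random.RandomState(0)
n_per_cluster = 20  # assumed size; the real toydata1 may use another
# Means of clusters 0-8 laid out on a 3x3 grid inside the (0,0)-(40,40)
# square; cluster 9 sits at the center with a much larger std.
grid_means = [(x, y) for y in (10, 20, 30) for x in (10, 20, 30)]
blobs = [rng.normal(mu, 2.0, size=(n_per_cluster, 2)) for mu in grid_means]
blobs.append(rng.normal((20, 20), 10.0, size=(n_per_cluster, 2)))
X = np.vstack(blobs)
# Nan-out dimensions as described: cluster 3 loses y, clusters 5 and 7
# lose x, and cluster 9 loses both (collapsing to [nan, nan]).
X[3 * n_per_cluster:4 * n_per_cluster, 1] = np.nan
X[5 * n_per_cluster:6 * n_per_cluster, 0] = np.nan
X[7 * n_per_cluster:8 * n_per_cluster, 0] = np.nan
X[9 * n_per_cluster:10 * n_per_cluster, :] = np.nan
```

Clusters 5 and 7 then carry information only in their y coordinate, cluster 3 only in its x coordinate, and cluster 9 carries none, which is exactly why the four-panel visualization above is needed.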
Module contents
- wbia.algo.hots.IMPORT_TUPLES = [('_pipeline_helpers', None), ('chip_match', None), ('exceptions', None), ('hstypes', None), ('match_chips4', None), ('name_scoring', None), ('neighbor_index', None), ('neighbor_index_cache', None), ('nn_weights', None), ('old_chip_match', None), ('pipeline', None), ('query_params', None), ('query_request', None), ('scoring', None)]
Regen Command: cd /home/joncrall/code/wbia/wbia/algo/hots makeinit.py --modname=wbia.algo.hots
- wbia.algo.hots.reassign_submodule_attributes(verbose=True)[source]
Why reloading all the modules doesn't do this, I don't know
- wbia.algo.hots.rrrr(verbose=True)
Reloads wbia.algo.hots and submodules